00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2027 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3292 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.135 Fetching changes from the remote Git repository 00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.171 Using shallow fetch with depth 1 00:00:00.171 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.171 > git --version # timeout=10 00:00:00.194 > git --version # 'git version 2.39.2' 00:00:00.194 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.208 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.208 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.038 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.050 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.062 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:04.063 > git config core.sparsecheckout # timeout=10 00:00:04.074 > git read-tree -mu HEAD # timeout=10 00:00:04.091 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5 00:00:04.134 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:04.135 > git rev-list --no-walk 959192f7ea6ec39575c28d47e59b1947fed9a104 # timeout=10 00:00:04.245 [Pipeline] Start of Pipeline 00:00:04.257 [Pipeline] library 00:00:04.258 Loading library shm_lib@master 00:00:04.258 Library shm_lib@master is cached. Copying from home. 00:00:04.271 [Pipeline] node 00:00:04.282 Running on WFP5 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:04.283 [Pipeline] { 00:00:04.291 [Pipeline] catchError 00:00:04.293 [Pipeline] { 00:00:04.305 [Pipeline] wrap 00:00:04.314 [Pipeline] { 00:00:04.320 [Pipeline] stage 00:00:04.321 [Pipeline] { (Prologue) 00:00:04.490 [Pipeline] sh 00:00:04.763 + logger -p user.info -t JENKINS-CI 00:00:04.783 [Pipeline] echo 00:00:04.785 Node: WFP5 00:00:04.793 [Pipeline] sh 00:00:05.082 [Pipeline] setCustomBuildProperty 00:00:05.090 [Pipeline] echo 00:00:05.091 Cleanup processes 00:00:05.094 [Pipeline] sh 00:00:05.426 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.426 1916423 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.436 [Pipeline] sh 00:00:05.712 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.712 ++ grep -v 'sudo pgrep' 00:00:05.712 ++ awk '{print $1}' 00:00:05.712 + sudo kill -9 00:00:05.712 + true 00:00:05.725 [Pipeline] cleanWs 00:00:05.734 [WS-CLEANUP] Deleting project workspace... 00:00:05.734 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.739 [WS-CLEANUP] done 00:00:05.742 [Pipeline] setCustomBuildProperty 00:00:05.751 [Pipeline] sh 00:00:06.026 + sudo git config --global --replace-all safe.directory '*' 00:00:06.143 [Pipeline] httpRequest 00:00:06.164 [Pipeline] echo 00:00:06.165 Sorcerer 10.211.164.101 is alive 00:00:06.171 [Pipeline] httpRequest 00:00:06.174 HttpMethod: GET 00:00:06.175 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:06.175 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:06.198 Response Code: HTTP/1.1 200 OK 00:00:06.198 Success: Status code 200 is in the accepted range: 200,404 00:00:06.199 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:28.454 [Pipeline] sh 00:00:28.737 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:28.752 [Pipeline] httpRequest 00:00:28.780 [Pipeline] echo 00:00:28.782 Sorcerer 10.211.164.101 is alive 00:00:28.791 [Pipeline] httpRequest 00:00:28.796 HttpMethod: GET 00:00:28.796 URL: http://10.211.164.101/packages/spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:00:28.797 Sending request to url: http://10.211.164.101/packages/spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:00:28.806 Response Code: HTTP/1.1 200 OK 00:00:28.807 Success: Status code 200 is in the accepted range: 200,404 00:00:28.807 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:00:57.852 [Pipeline] sh 00:00:58.168 + tar --no-same-owner -xf spdk_8711e7e9b320e91cd9789b05190f8c3dbba55125.tar.gz 00:01:00.715 [Pipeline] sh 00:01:00.998 + git -C spdk log --oneline -n5 00:01:00.998 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:01:00.998 50222f810 configure: don't exit on non Intel platforms 00:01:00.998 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:00.998 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:00.998 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:01:01.017 [Pipeline] withCredentials 00:01:01.027 > git --version # timeout=10 00:01:01.039 > git --version # 'git version 2.39.2' 00:01:01.057 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:01.060 [Pipeline] { 00:01:01.069 [Pipeline] retry 00:01:01.072 [Pipeline] { 00:01:01.091 [Pipeline] sh 00:01:01.374 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:01.953 [Pipeline] } 00:01:01.977 [Pipeline] // retry 00:01:01.983 [Pipeline] } 00:01:02.004 [Pipeline] // withCredentials 00:01:02.015 [Pipeline] httpRequest 00:01:02.035 [Pipeline] echo 00:01:02.037 Sorcerer 10.211.164.101 is alive 00:01:02.046 [Pipeline] httpRequest 00:01:02.051 HttpMethod: GET 00:01:02.052 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:02.052 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:02.064 Response Code: HTTP/1.1 200 OK 00:01:02.065 Success: Status code 200 is in the accepted range: 200,404 00:01:02.066 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:09.336 [Pipeline] sh 00:01:09.620 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:11.013 [Pipeline] sh 00:01:11.297 + git -C dpdk log --oneline -n5 00:01:11.297 caf0f5d395 
version: 22.11.4 00:01:11.297 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:11.297 dc9c799c7d vhost: fix missing spinlock unlock 00:01:11.297 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:11.297 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:11.308 [Pipeline] } 00:01:11.327 [Pipeline] // stage 00:01:11.336 [Pipeline] stage 00:01:11.338 [Pipeline] { (Prepare) 00:01:11.360 [Pipeline] writeFile 00:01:11.378 [Pipeline] sh 00:01:11.659 + logger -p user.info -t JENKINS-CI 00:01:11.671 [Pipeline] sh 00:01:11.949 + logger -p user.info -t JENKINS-CI 00:01:11.962 [Pipeline] sh 00:01:12.244 + cat autorun-spdk.conf 00:01:12.244 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.244 SPDK_TEST_NVMF=1 00:01:12.244 SPDK_TEST_NVME_CLI=1 00:01:12.244 SPDK_TEST_NVMF_NICS=mlx5 00:01:12.244 SPDK_RUN_UBSAN=1 00:01:12.244 NET_TYPE=phy 00:01:12.244 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.244 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:12.251 RUN_NIGHTLY=1 00:01:12.255 [Pipeline] readFile 00:01:12.282 [Pipeline] withEnv 00:01:12.284 [Pipeline] { 00:01:12.300 [Pipeline] sh 00:01:12.583 + set -ex 00:01:12.583 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:12.583 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:12.583 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.583 ++ SPDK_TEST_NVMF=1 00:01:12.583 ++ SPDK_TEST_NVME_CLI=1 00:01:12.583 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:12.583 ++ SPDK_RUN_UBSAN=1 00:01:12.583 ++ NET_TYPE=phy 00:01:12.583 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.583 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:12.583 ++ RUN_NIGHTLY=1 00:01:12.583 + case $SPDK_TEST_NVMF_NICS in 00:01:12.583 + DRIVERS=mlx5_ib 00:01:12.583 + [[ -n mlx5_ib ]] 00:01:12.583 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.583 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.151 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.151 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.151 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.151 + true 00:01:19.151 + for D in $DRIVERS 00:01:19.151 + sudo modprobe mlx5_ib 00:01:19.151 + exit 0 00:01:19.160 [Pipeline] } 00:01:19.174 [Pipeline] // withEnv 00:01:19.180 [Pipeline] } 00:01:19.194 [Pipeline] // stage 00:01:19.202 [Pipeline] catchError 00:01:19.204 [Pipeline] { 00:01:19.214 [Pipeline] timeout 00:01:19.215 Timeout set to expire in 1 hr 0 min 00:01:19.216 [Pipeline] { 00:01:19.229 [Pipeline] stage 00:01:19.231 [Pipeline] { (Tests) 00:01:19.246 [Pipeline] sh 00:01:19.528 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.528 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.528 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:19.528 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:19.528 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:19.528 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:19.528 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:19.528 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:19.528 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:19.528 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:19.528 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:19.528 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.528 + source /etc/os-release 00:01:19.528 ++ NAME='Fedora Linux' 00:01:19.528 ++ VERSION='38 (Cloud Edition)' 00:01:19.528 ++ ID=fedora 00:01:19.528 ++ VERSION_ID=38 00:01:19.528 ++ VERSION_CODENAME= 00:01:19.528 ++ PLATFORM_ID=platform:f38 00:01:19.528 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:19.528 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.528 ++ LOGO=fedora-logo-icon 00:01:19.528 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:19.528 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.528 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:19.528 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.528 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.528 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.528 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:19.528 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.528 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:19.528 ++ SUPPORT_END=2024-05-14 00:01:19.528 ++ VARIANT='Cloud Edition' 00:01:19.528 ++ VARIANT_ID=cloud 00:01:19.528 + uname -a 00:01:19.528 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:19.528 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:22.058 Hugepages 00:01:22.058 node hugesize free / total 00:01:22.058 node0 1048576kB 0 / 0 00:01:22.058 node0 2048kB 0 / 0 00:01:22.058 node1 1048576kB 0 / 0 00:01:22.058 node1 2048kB 0 / 0 00:01:22.058 00:01:22.058 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.058 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:22.058 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:22.058 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:22.058 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:22.058 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:22.058 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:22.058 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:22.058 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:22.058 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:22.058 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:22.058 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:22.058 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:22.058 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:22.058 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:22.058 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:22.058 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:22.058 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:22.058 + rm -f /tmp/spdk-ld-path 00:01:22.058 + source autorun-spdk.conf 00:01:22.058 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.058 ++ SPDK_TEST_NVMF=1 00:01:22.058 ++ SPDK_TEST_NVME_CLI=1 00:01:22.058 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:22.058 ++ SPDK_RUN_UBSAN=1 00:01:22.058 ++ NET_TYPE=phy 00:01:22.058 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:22.058 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:22.058 ++ RUN_NIGHTLY=1 00:01:22.058 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.058 + [[ -n '' ]] 00:01:22.058 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:22.058 + for M in /var/spdk/build-*-manifest.txt 
00:01:22.058 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.058 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.058 + for M in /var/spdk/build-*-manifest.txt 00:01:22.058 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.058 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.058 ++ uname 00:01:22.058 + [[ Linux == \L\i\n\u\x ]] 00:01:22.058 + sudo dmesg -T 00:01:22.058 + sudo dmesg --clear 00:01:22.058 + dmesg_pid=1917390 00:01:22.058 + [[ Fedora Linux == FreeBSD ]] 00:01:22.058 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.058 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.058 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.058 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.058 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.058 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.058 + sudo dmesg -Tw 00:01:22.058 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.058 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.058 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.058 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.058 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.058 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.058 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.058 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.058 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.058 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.058 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:22.058 Test configuration: 00:01:22.058 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.058 SPDK_TEST_NVMF=1 00:01:22.058 SPDK_TEST_NVME_CLI=1 00:01:22.058 SPDK_TEST_NVMF_NICS=mlx5 00:01:22.058 SPDK_RUN_UBSAN=1 00:01:22.058 NET_TYPE=phy 00:01:22.058 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:22.058 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:22.058 RUN_NIGHTLY=1 10:23:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:22.058 10:23:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.058 10:23:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.058 10:23:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.058 10:23:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.058 10:23:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.058 10:23:29 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.058 10:23:29 -- paths/export.sh@5 -- $ export PATH 00:01:22.058 10:23:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.058 10:23:29 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:22.058 10:23:29 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:22.058 10:23:29 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721809409.XXXXXX 00:01:22.058 10:23:29 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721809409.lxUKD9 00:01:22.058 10:23:29 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:22.058 10:23:29 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:01:22.058 10:23:29 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:22.058 10:23:29 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:22.058 10:23:29 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.058 10:23:29 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.058 10:23:29 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:22.058 10:23:29 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:22.058 10:23:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.058 10:23:29 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:22.058 10:23:29 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:22.058 10:23:29 -- pm/common@17 -- $ local monitor 00:01:22.058 10:23:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.058 10:23:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.058 10:23:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.058 10:23:29 -- pm/common@21 -- $ date +%s 00:01:22.058 10:23:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.058 10:23:29 -- pm/common@21 -- $ date +%s 00:01:22.058 10:23:29 -- pm/common@25 -- $ sleep 1 00:01:22.058 10:23:29 -- pm/common@21 -- $ date +%s 00:01:22.058 10:23:29 -- pm/common@21 -- $ date +%s 00:01:22.058 10:23:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721809409 00:01:22.058 10:23:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721809409 00:01:22.058 10:23:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721809409 00:01:22.058 10:23:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721809409 00:01:22.058 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721809409_collect-vmstat.pm.log 00:01:22.058 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721809409_collect-cpu-temp.pm.log 00:01:22.059 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721809409_collect-cpu-load.pm.log 00:01:22.059 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721809409_collect-bmc-pm.bmc.pm.log 00:01:22.994 10:23:30 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:22.994 10:23:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:22.994 10:23:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:22.994 10:23:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:22.994 10:23:30 -- spdk/autobuild.sh@16 -- $ date -u 00:01:22.994 Wed Jul 24 08:23:30 AM UTC 2024 00:01:22.994 10:23:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:22.994 v24.09-pre-311-g8711e7e9b 00:01:22.994 10:23:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:22.994 10:23:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:22.994 10:23:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:22.994 10:23:30 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:22.994 10:23:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:22.994 10:23:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.994 ************************************ 00:01:22.994 START TEST ubsan 00:01:22.994 ************************************ 00:01:22.994 10:23:30 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:22.994 using ubsan 00:01:22.994 00:01:22.994 real 0m0.000s 00:01:22.994 user 0m0.000s 00:01:22.994 sys 0m0.000s 00:01:22.994 10:23:30 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:22.994 10:23:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:22.994 ************************************ 00:01:22.994 END TEST ubsan 00:01:22.994 ************************************ 00:01:22.994 10:23:30 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:22.994 10:23:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:22.994 10:23:30 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:22.994 10:23:30 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:22.994 10:23:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:22.994 10:23:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.252 ************************************ 00:01:23.252 START TEST build_native_dpdk 00:01:23.252 
************************************ 00:01:23.252 10:23:30 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:23.252 10:23:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:23.252 caf0f5d395 version: 22.11.4 00:01:23.252 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:23.253 dc9c799c7d vhost: fix missing spinlock unlock 00:01:23.253 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:23.253 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:23.253 10:23:30 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:23.253 patching file config/rte_config.h 00:01:23.253 Hunk #1 succeeded at 60 (offset 1 line). 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:23.253 10:23:30 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:23.253 patching file lib/pcapng/rte_pcapng.c 00:01:23.253 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:23.253 10:23:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:27.470 The Meson build system 00:01:27.470 Version: 1.3.1 00:01:27.470 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:27.470 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:27.470 Build type: native build 00:01:27.470 Program cat found: YES (/usr/bin/cat) 00:01:27.470 Project name: DPDK 00:01:27.470 Project version: 22.11.4 00:01:27.470 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:27.470 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:27.470 Host machine cpu family: x86_64 00:01:27.470 Host machine cpu: x86_64 00:01:27.470 Message: ## Building in Developer Mode ## 00:01:27.470 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:27.470 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:27.470 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:27.470 Program objdump found: YES (/usr/bin/objdump) 00:01:27.470 Program python3 found: YES (/usr/bin/python3) 00:01:27.470 Program cat found: YES (/usr/bin/cat) 00:01:27.470 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:27.470 Checking for size of "void *" : 8 00:01:27.470 Checking for size of "void *" : 8 (cached) 00:01:27.470 Library m found: YES 00:01:27.470 Library numa found: YES 00:01:27.470 Has header "numaif.h" : YES 00:01:27.470 Library fdt found: NO 00:01:27.470 Library execinfo found: NO 00:01:27.470 Has header "execinfo.h" : YES 00:01:27.470 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:27.470 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:27.470 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:27.470 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:27.470 Run-time dependency openssl found: YES 3.0.9 00:01:27.470 Run-time dependency libpcap found: YES 1.10.4 00:01:27.470 Has header "pcap.h" with dependency libpcap: YES 00:01:27.470 Compiler for C supports arguments -Wcast-qual: YES 00:01:27.470 Compiler for C supports arguments -Wdeprecated: YES 00:01:27.470 Compiler for C supports arguments -Wformat: YES 00:01:27.470 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:27.470 Compiler for C supports arguments -Wformat-security: NO 00:01:27.470 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:27.470 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:27.470 Compiler for C supports arguments -Wnested-externs: YES 00:01:27.470 Compiler for C supports arguments -Wold-style-definition: YES 00:01:27.470 Compiler for C supports arguments -Wpointer-arith: YES 00:01:27.470 Compiler for C supports arguments -Wsign-compare: YES 00:01:27.470 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:27.470 Compiler for C supports arguments -Wundef: YES 00:01:27.470 Compiler for C supports arguments -Wwrite-strings: YES 00:01:27.470 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:27.470 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:27.470 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:27.470 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:27.470 Compiler for C supports arguments -mavx512f: YES 00:01:27.470 Checking if "AVX512 checking" compiles: YES 00:01:27.470 Fetching value of define "__SSE4_2__" : 1 00:01:27.470 Fetching value of define "__AES__" : 1 00:01:27.470 Fetching value of define "__AVX__" : 1 00:01:27.470 Fetching value of define "__AVX2__" : 1 00:01:27.470 Fetching value of define "__AVX512BW__" : 1 00:01:27.470 Fetching value of define "__AVX512CD__" : 1 00:01:27.470 Fetching value of define "__AVX512DQ__" : 1 00:01:27.470 Fetching value of define "__AVX512F__" : 1 00:01:27.470 Fetching value of define "__AVX512VL__" : 1 00:01:27.470 Fetching value of define "__PCLMUL__" : 1 00:01:27.470 Fetching value of define "__RDRND__" : 1 00:01:27.470 Fetching value of define "__RDSEED__" : 1 00:01:27.470 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:27.470 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:27.470 Message: lib/kvargs: Defining dependency "kvargs" 00:01:27.470 Message: lib/telemetry: Defining dependency "telemetry" 00:01:27.470 Checking for function "getentropy" : YES 00:01:27.470 Message: lib/eal: Defining dependency "eal" 00:01:27.470 Message: lib/ring: Defining dependency "ring" 00:01:27.470 Message: lib/rcu: Defining dependency "rcu" 00:01:27.470 Message: lib/mempool: Defining dependency "mempool" 00:01:27.470 Message: lib/mbuf: Defining dependency "mbuf" 00:01:27.470 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:27.470 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:27.470 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:27.470 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:27.470 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:27.470 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:27.470 Compiler for C supports arguments -mpclmul: YES 00:01:27.470 Compiler for C supports arguments -maes: YES 00:01:27.470 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:27.471 Compiler for C supports arguments -mavx512bw: YES 00:01:27.471 Compiler for C supports arguments -mavx512dq: YES 00:01:27.471 Compiler for C supports arguments -mavx512vl: YES 00:01:27.471 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:27.471 Compiler for C supports arguments -mavx2: YES 00:01:27.471 Compiler for C supports arguments -mavx: YES 00:01:27.471 Message: lib/net: Defining dependency "net" 00:01:27.471 Message: lib/meter: Defining dependency "meter" 00:01:27.471 Message: lib/ethdev: Defining dependency "ethdev" 00:01:27.471 Message: lib/pci: Defining dependency "pci" 00:01:27.471 Message: lib/cmdline: Defining dependency "cmdline" 00:01:27.471 Message: lib/metrics: Defining dependency "metrics" 00:01:27.471 Message: lib/hash: Defining dependency "hash" 00:01:27.471 Message: lib/timer: Defining dependency "timer" 00:01:27.471 Fetching value of define "__AVX2__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:27.471 Message: lib/acl: Defining dependency "acl" 00:01:27.471 Message: lib/bbdev: Defining dependency "bbdev" 00:01:27.471 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:27.471 Run-time dependency libelf found: YES 0.190 00:01:27.471 Message: lib/bpf: Defining dependency "bpf" 00:01:27.471 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:27.471 Message: lib/compressdev: Defining dependency "compressdev" 00:01:27.471 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:27.471 Message: lib/distributor: Defining dependency "distributor" 00:01:27.471 Message: lib/efd: Defining dependency "efd" 00:01:27.471 Message: lib/eventdev: Defining dependency "eventdev" 00:01:27.471 Message: lib/gpudev: Defining dependency "gpudev" 00:01:27.471 Message: lib/gro: Defining dependency "gro" 00:01:27.471 Message: lib/gso: Defining dependency "gso" 00:01:27.471 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:27.471 Message: lib/jobstats: Defining dependency "jobstats" 00:01:27.471 Message: lib/latencystats: Defining dependency "latencystats" 00:01:27.471 Message: lib/lpm: Defining dependency "lpm" 00:01:27.471 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:27.471 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:27.471 Message: lib/member: Defining dependency "member" 00:01:27.471 Message: lib/pcapng: Defining dependency "pcapng" 00:01:27.471 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:27.471 Message: lib/power: Defining dependency "power" 00:01:27.471 Message: lib/rawdev: Defining dependency "rawdev" 00:01:27.471 Message: lib/regexdev: Defining dependency "regexdev" 00:01:27.471 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:27.471 Message: lib/rib: Defining dependency "rib" 00:01:27.471 Message: lib/reorder: Defining dependency "reorder" 00:01:27.471 Message: lib/sched: Defining dependency "sched" 00:01:27.471 Message: lib/security: Defining dependency "security" 00:01:27.471 Message: lib/stack: Defining dependency "stack" 00:01:27.471 Has header "linux/userfaultfd.h" : YES 00:01:27.471 Message: lib/vhost: Defining dependency "vhost" 00:01:27.471 Message: lib/ipsec: Defining dependency "ipsec" 00:01:27.471 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:27.471 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:27.471 Message: lib/fib: Defining dependency "fib" 00:01:27.471 Message: lib/port: Defining dependency "port" 00:01:27.471 Message: lib/pdump: Defining dependency "pdump" 00:01:27.471 Message: lib/table: Defining dependency "table" 00:01:27.471 Message: lib/pipeline: Defining dependency "pipeline" 00:01:27.471 Message: lib/graph: Defining dependency "graph" 00:01:27.471 Message: lib/node: Defining dependency "node" 00:01:27.471 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:27.471 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:27.471 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:27.471 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:27.471 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:27.471 Compiler for C supports arguments -Wno-unused-value: YES 00:01:27.471 Compiler for C supports arguments -Wno-format: YES 00:01:27.471 Compiler for C supports arguments -Wno-format-security: YES 00:01:27.471 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:28.405 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:28.405 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:28.405 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:28.405 Fetching value of define "__AVX2__" : 1 (cached) 00:01:28.405 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.405 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.405 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.405 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:28.405 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:28.405 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:28.405 Program doxygen found: YES (/usr/bin/doxygen) 00:01:28.405 Configuring doxy-api.conf using configuration 00:01:28.405 Program sphinx-build found: NO 00:01:28.405 Configuring rte_build_config.h using configuration 00:01:28.405 Message: 00:01:28.405 ================= 00:01:28.405 Applications Enabled 00:01:28.405 ================= 00:01:28.405 00:01:28.405 apps: 00:01:28.405 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:28.405 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:28.405 test-security-perf, 00:01:28.405 00:01:28.405 Message: 00:01:28.405 ================= 00:01:28.405 Libraries Enabled 00:01:28.405 ================= 00:01:28.405 00:01:28.405 libs: 00:01:28.405 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:28.405 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:28.405 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:28.405 eventdev, gpudev, 
gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:28.405 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:28.405 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:28.405 table, pipeline, graph, node, 00:01:28.405 00:01:28.405 Message: 00:01:28.405 =============== 00:01:28.405 Drivers Enabled 00:01:28.405 =============== 00:01:28.405 00:01:28.405 common: 00:01:28.405 00:01:28.405 bus: 00:01:28.405 pci, vdev, 00:01:28.405 mempool: 00:01:28.405 ring, 00:01:28.405 dma: 00:01:28.405 00:01:28.405 net: 00:01:28.405 i40e, 00:01:28.405 raw: 00:01:28.405 00:01:28.405 crypto: 00:01:28.405 00:01:28.405 compress: 00:01:28.405 00:01:28.405 regex: 00:01:28.405 00:01:28.405 vdpa: 00:01:28.405 00:01:28.405 event: 00:01:28.405 00:01:28.405 baseband: 00:01:28.405 00:01:28.405 gpu: 00:01:28.405 00:01:28.405 00:01:28.405 Message: 00:01:28.405 ================= 00:01:28.405 Content Skipped 00:01:28.405 ================= 00:01:28.405 00:01:28.405 apps: 00:01:28.405 00:01:28.405 libs: 00:01:28.405 kni: explicitly disabled via build config (deprecated lib) 00:01:28.405 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:28.405 00:01:28.405 drivers: 00:01:28.405 common/cpt: not in enabled drivers build config 00:01:28.405 common/dpaax: not in enabled drivers build config 00:01:28.405 common/iavf: not in enabled drivers build config 00:01:28.405 common/idpf: not in enabled drivers build config 00:01:28.405 common/mvep: not in enabled drivers build config 00:01:28.405 common/octeontx: not in enabled drivers build config 00:01:28.405 bus/auxiliary: not in enabled drivers build config 00:01:28.405 bus/dpaa: not in enabled drivers build config 00:01:28.405 bus/fslmc: not in enabled drivers build config 00:01:28.405 bus/ifpga: not in enabled drivers build config 00:01:28.405 bus/vmbus: not in enabled drivers build config 00:01:28.405 common/cnxk: not in enabled drivers build config 00:01:28.405 common/mlx5: not in enabled drivers build config 00:01:28.405 common/qat: not in enabled drivers build config 00:01:28.405 common/sfc_efx: not in enabled drivers build config 00:01:28.405 mempool/bucket: not in enabled drivers build config 00:01:28.405 mempool/cnxk: not in enabled drivers build config 00:01:28.405 mempool/dpaa: not in enabled drivers build config 00:01:28.405 mempool/dpaa2: not in enabled drivers build config 00:01:28.405 mempool/octeontx: not in enabled drivers build config 00:01:28.405 mempool/stack: not in enabled drivers build config 00:01:28.405 dma/cnxk: not in enabled drivers build config 00:01:28.405 dma/dpaa: not in enabled drivers build config 00:01:28.405 dma/dpaa2: not in enabled drivers build config 00:01:28.405 dma/hisilicon: not in enabled drivers build config 00:01:28.405 dma/idxd: not in enabled drivers build config 00:01:28.405 dma/ioat: not in enabled drivers build config 00:01:28.405 dma/skeleton: not in enabled drivers build config 00:01:28.405 net/af_packet: not in enabled drivers build config 00:01:28.405 net/af_xdp: not in enabled drivers build config 00:01:28.405 net/ark: not in enabled drivers build config 00:01:28.405 net/atlantic: not in enabled drivers build config 00:01:28.405 net/avp: not in enabled drivers build config 00:01:28.405 net/axgbe: not in enabled drivers build config 00:01:28.405 net/bnx2x: not in enabled drivers build config 00:01:28.405 net/bnxt: not in enabled drivers build config 00:01:28.405 net/bonding: not in enabled drivers build config 00:01:28.405 net/cnxk: not in enabled drivers build config 
00:01:28.405 net/cxgbe: not in enabled drivers build config 00:01:28.405 net/dpaa: not in enabled drivers build config 00:01:28.405 net/dpaa2: not in enabled drivers build config 00:01:28.405 net/e1000: not in enabled drivers build config 00:01:28.405 net/ena: not in enabled drivers build config 00:01:28.405 net/enetc: not in enabled drivers build config 00:01:28.405 net/enetfec: not in enabled drivers build config 00:01:28.405 net/enic: not in enabled drivers build config 00:01:28.405 net/failsafe: not in enabled drivers build config 00:01:28.405 net/fm10k: not in enabled drivers build config 00:01:28.405 net/gve: not in enabled drivers build config 00:01:28.405 net/hinic: not in enabled drivers build config 00:01:28.405 net/hns3: not in enabled drivers build config 00:01:28.405 net/iavf: not in enabled drivers build config 00:01:28.405 net/ice: not in enabled drivers build config 00:01:28.405 net/idpf: not in enabled drivers build config 00:01:28.405 net/igc: not in enabled drivers build config 00:01:28.405 net/ionic: not in enabled drivers build config 00:01:28.405 net/ipn3ke: not in enabled drivers build config 00:01:28.405 net/ixgbe: not in enabled drivers build config 00:01:28.405 net/kni: not in enabled drivers build config 00:01:28.405 net/liquidio: not in enabled drivers build config 00:01:28.405 net/mana: not in enabled drivers build config 00:01:28.405 net/memif: not in enabled drivers build config 00:01:28.405 net/mlx4: not in enabled drivers build config 00:01:28.405 net/mlx5: not in enabled drivers build config 00:01:28.405 net/mvneta: not in enabled drivers build config 00:01:28.405 net/mvpp2: not in enabled drivers build config 00:01:28.405 net/netvsc: not in enabled drivers build config 00:01:28.405 net/nfb: not in enabled drivers build config 00:01:28.405 net/nfp: not in enabled drivers build config 00:01:28.405 net/ngbe: not in enabled drivers build config 00:01:28.405 net/null: not in enabled drivers build config 00:01:28.405 net/octeontx: not in enabled drivers build config 00:01:28.405 net/octeon_ep: not in enabled drivers build config 00:01:28.405 net/pcap: not in enabled drivers build config 00:01:28.405 net/pfe: not in enabled drivers build config 00:01:28.405 net/qede: not in enabled drivers build config 00:01:28.405 net/ring: not in enabled drivers build config 00:01:28.405 net/sfc: not in enabled drivers build config 00:01:28.405 net/softnic: not in enabled drivers build config 00:01:28.405 net/tap: not in enabled drivers build config 00:01:28.405 net/thunderx: not in enabled drivers build config 00:01:28.405 net/txgbe: not in enabled drivers build config 00:01:28.405 net/vdev_netvsc: not in enabled drivers build config 00:01:28.405 net/vhost: not in enabled drivers build config 00:01:28.405 net/virtio: not in enabled drivers build config 00:01:28.405 net/vmxnet3: not in enabled drivers build config 00:01:28.405 raw/cnxk_bphy: not in enabled drivers build config 00:01:28.405 raw/cnxk_gpio: not in enabled drivers build config 00:01:28.405 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:28.405 raw/ifpga: not in enabled drivers build config 00:01:28.406 raw/ntb: not in enabled drivers build config 00:01:28.406 raw/skeleton: not in enabled drivers build config 00:01:28.406 crypto/armv8: not in enabled drivers build config 00:01:28.406 crypto/bcmfs: not in enabled drivers build config 00:01:28.406 crypto/caam_jr: not in enabled drivers build config 00:01:28.406 crypto/ccp: not in enabled drivers build config 00:01:28.406 crypto/cnxk: not in enabled drivers 
build config 00:01:28.406 crypto/dpaa_sec: not in enabled drivers build config 00:01:28.406 crypto/dpaa2_sec: not in enabled drivers build config 00:01:28.406 crypto/ipsec_mb: not in enabled drivers build config 00:01:28.406 crypto/mlx5: not in enabled drivers build config 00:01:28.406 crypto/mvsam: not in enabled drivers build config 00:01:28.406 crypto/nitrox: not in enabled drivers build config 00:01:28.406 crypto/null: not in enabled drivers build config 00:01:28.406 crypto/octeontx: not in enabled drivers build config 00:01:28.406 crypto/openssl: not in enabled drivers build config 00:01:28.406 crypto/scheduler: not in enabled drivers build config 00:01:28.406 crypto/uadk: not in enabled drivers build config 00:01:28.406 crypto/virtio: not in enabled drivers build config 00:01:28.406 compress/isal: not in enabled drivers build config 00:01:28.406 compress/mlx5: not in enabled drivers build config 00:01:28.406 compress/octeontx: not in enabled drivers build config 00:01:28.406 compress/zlib: not in enabled drivers build config 00:01:28.406 regex/mlx5: not in enabled drivers build config 00:01:28.406 regex/cn9k: not in enabled drivers build config 00:01:28.406 vdpa/ifc: not in enabled drivers build config 00:01:28.406 vdpa/mlx5: not in enabled drivers build config 00:01:28.406 vdpa/sfc: not in enabled drivers build config 00:01:28.406 event/cnxk: not in enabled drivers build config 00:01:28.406 event/dlb2: not in enabled drivers build config 00:01:28.406 event/dpaa: not in enabled drivers build config 00:01:28.406 event/dpaa2: not in enabled drivers build config 00:01:28.406 event/dsw: not in enabled drivers build config 00:01:28.406 event/opdl: not in enabled drivers build config 00:01:28.406 event/skeleton: not in enabled drivers build config 00:01:28.406 event/sw: not in enabled drivers build config 00:01:28.406 event/octeontx: not in enabled drivers build config 00:01:28.406 baseband/acc: not in enabled drivers build config 00:01:28.406 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:28.406 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:28.406 baseband/la12xx: not in enabled drivers build config 00:01:28.406 baseband/null: not in enabled drivers build config 00:01:28.406 baseband/turbo_sw: not in enabled drivers build config 00:01:28.406 gpu/cuda: not in enabled drivers build config 00:01:28.406 00:01:28.406 00:01:28.406 Build targets in project: 311 00:01:28.406 00:01:28.406 DPDK 22.11.4 00:01:28.406 00:01:28.406 User defined options 00:01:28.406 libdir : lib 00:01:28.406 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:28.406 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:28.406 c_link_args : 00:01:28.406 enable_docs : false 00:01:28.406 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:28.406 enable_kmods : false 00:01:28.406 machine : native 00:01:28.406 tests : false 00:01:28.406 00:01:28.406 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.406 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:28.406 10:23:35 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j96 00:01:28.670 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:28.670 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:28.670 [2/740] Generating lib/rte_telemetry_def with a custom command 00:01:28.670 [3/740] Generating lib/rte_kvargs_def with a custom command 00:01:28.670 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:28.670 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:28.670 [6/740] Generating lib/rte_rcu_def with a custom command 00:01:28.670 [7/740] Generating lib/rte_ring_mingw with a custom command 00:01:28.670 [8/740] Generating lib/rte_mempool_def with a custom command 00:01:28.670 [9/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:28.670 [10/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:28.670 [11/740] Generating lib/rte_rcu_mingw with a custom command 00:01:28.670 [12/740] Generating lib/rte_eal_mingw with a custom command 00:01:28.670 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:28.670 [14/740] Generating lib/rte_ring_def with a custom command 00:01:28.670 [15/740] Generating lib/rte_eal_def with a custom command 00:01:28.670 [16/740] Generating lib/rte_mempool_mingw with a custom command 00:01:28.670 [17/740] Generating lib/rte_mbuf_def with a custom command 00:01:28.670 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:28.670 [19/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:28.670 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:28.670 [21/740] Generating lib/rte_net_def with a custom command 00:01:28.670 [22/740] Generating lib/rte_meter_def with a custom command 00:01:28.670 [23/740] Generating lib/rte_meter_mingw with a custom command 00:01:28.670 [24/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:28.670 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:28.670 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:28.670 [27/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:28.670 [28/740] Generating lib/rte_net_mingw with a custom command 00:01:28.670 [29/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:28.670 [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:28.670 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:28.670 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:28.670 [33/740] Linking static target lib/librte_kvargs.a 00:01:28.929 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:28.929 [35/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:28.929 [36/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:28.929 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:28.929 [38/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:28.929 [39/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:28.929 [40/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:28.929 [41/740] Generating lib/rte_pci_def 
with a custom command 00:01:28.929 [42/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:28.929 [43/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:28.929 [44/740] Generating lib/rte_pci_mingw with a custom command 00:01:28.929 [45/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:28.929 [46/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:28.929 [47/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:28.929 [48/740] Generating lib/rte_ethdev_def with a custom command 00:01:28.929 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:28.929 [50/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:28.929 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:28.929 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:28.929 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:28.929 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:28.929 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:28.929 [56/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:28.929 [57/740] Generating lib/rte_cmdline_def with a custom command 00:01:28.929 [58/740] Generating lib/rte_metrics_mingw with a custom command 00:01:28.929 [59/740] Generating lib/rte_metrics_def with a custom command 00:01:28.929 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:28.929 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:28.929 [62/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:28.929 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:28.929 [64/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:28.929 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:28.929 [66/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:28.929 [67/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:28.929 [68/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:28.929 [69/740] Generating lib/rte_hash_def with a custom command 00:01:28.929 [70/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:28.929 [71/740] Generating lib/rte_hash_mingw with a custom command 00:01:28.929 [72/740] Generating lib/rte_timer_mingw with a custom command 00:01:28.929 [73/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:28.929 [74/740] Generating lib/rte_timer_def with a custom command 00:01:28.929 [75/740] Linking static target lib/librte_pci.a 00:01:28.929 [76/740] Linking static target lib/librte_ring.a 00:01:28.929 [77/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:28.929 [78/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:28.929 [79/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:28.929 [80/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:28.929 [81/740] Linking static target lib/librte_meter.a 00:01:28.929 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:28.929 [83/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:28.929 [84/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:28.929 [85/740] Generating lib/rte_acl_def with a custom command 00:01:28.929 [86/740] Generating lib/rte_acl_mingw with a custom command 00:01:28.929 [87/740] Generating lib/rte_bbdev_def with a custom command 00:01:28.929 [88/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:28.929 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:28.929 [90/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:28.929 [91/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:28.929 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:28.929 [93/740] Generating lib/rte_bitratestats_def with a custom command 00:01:28.929 [94/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:28.929 [95/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:28.929 [96/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:28.929 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:28.929 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:29.192 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:29.192 [100/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:29.192 [101/740] Generating lib/rte_bpf_def with a custom command 00:01:29.192 [102/740] Generating lib/rte_bpf_mingw with a custom command 00:01:29.192 [103/740] Generating lib/rte_cfgfile_def with a custom command 00:01:29.192 [104/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:29.192 [105/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:29.192 [106/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:29.192 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:29.192 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:29.192 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:29.192 [110/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:29.192 [111/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:29.192 [112/740] Generating lib/rte_compressdev_def with a custom command 00:01:29.192 [113/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:29.192 [114/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:29.192 [115/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:29.192 [116/740] Generating lib/rte_cryptodev_def with a custom command 00:01:29.192 [117/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:29.192 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:29.192 [119/740] Generating lib/rte_distributor_def with a custom command 00:01:29.192 [120/740] Generating lib/rte_distributor_mingw with a custom command 00:01:29.192 [121/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:29.192 [122/740] Generating lib/rte_efd_def with a custom command 00:01:29.192 [123/740] Generating lib/rte_efd_mingw with a custom command 00:01:29.192 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:29.192 [125/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:29.192 [126/740] Generating lib/rte_eventdev_def with a custom command 00:01:29.192 [127/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:29.192 [128/740] Linking target lib/librte_kvargs.so.23.0 00:01:29.192 [129/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.192 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:29.192 [131/740] Generating lib/rte_gpudev_def with a custom command 00:01:29.192 [132/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.457 [133/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:29.457 [134/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.457 [135/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:29.457 [136/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:29.457 [137/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:29.457 [138/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:29.457 [139/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:29.457 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:29.457 [141/740] Generating lib/rte_gro_mingw with a custom command 00:01:29.457 [142/740] Generating lib/rte_gro_def with a custom command 00:01:29.457 [143/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:29.457 [144/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:29.457 [145/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:29.457 [146/740] Generating lib/rte_gso_def with a custom command 00:01:29.457 [147/740] Generating lib/rte_gso_mingw with a custom command 00:01:29.457 [148/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:29.457 [149/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:29.457 [150/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:29.457 [151/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:29.457 [152/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:29.457 [153/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:29.457 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:29.457 [155/740] Linking static target lib/librte_cfgfile.a 00:01:29.457 [156/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:29.457 [157/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:29.457 [158/740] Generating lib/rte_ip_frag_def with a custom command 00:01:29.457 [159/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:29.457 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:29.457 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:29.457 [162/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:29.457 [163/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:29.457 [164/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:29.457 [165/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:29.457 [166/740] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:29.457 [167/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:29.457 [168/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:29.457 [169/740] Linking static target lib/librte_cmdline.a 00:01:29.457 [170/740] Linking static target lib/librte_metrics.a 00:01:29.457 [171/740] Generating lib/rte_jobstats_def with a custom command 00:01:29.457 [172/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:29.457 [173/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:29.457 [174/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:29.457 [175/740] Generating lib/rte_latencystats_def with a custom command 00:01:29.717 [176/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:29.717 [177/740] Generating lib/rte_lpm_mingw with a custom command 00:01:29.717 [178/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:29.717 [179/740] Generating lib/rte_lpm_def with a custom command 00:01:29.717 [180/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:29.717 [181/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:29.717 [182/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:29.717 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:29.717 [184/740] Linking static target lib/librte_timer.a 00:01:29.717 [185/740] Generating lib/rte_member_def with a custom command 00:01:29.717 [186/740] Generating lib/rte_member_mingw with a custom command 00:01:29.717 [187/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:29.717 [188/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:29.717 [189/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:29.717 [190/740] Generating lib/rte_pcapng_def with a custom command 00:01:29.717 [191/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:29.717 [192/740] Linking static target lib/librte_telemetry.a 00:01:29.717 [193/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:29.717 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:29.717 [195/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:29.717 [196/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:29.717 [197/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:29.717 [198/740] Linking static target lib/librte_net.a 00:01:29.717 [199/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:29.717 [200/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:29.717 [201/740] Linking static target lib/librte_bitratestats.a 00:01:29.717 [202/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:29.717 [203/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:29.717 [204/740] Generating lib/rte_power_def with a custom command 00:01:29.717 [205/740] Generating lib/rte_power_mingw with a custom command 00:01:29.717 [206/740] Linking static target lib/librte_jobstats.a 00:01:29.717 [207/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:29.717 [208/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:29.717 [209/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:29.717 [210/740] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:29.717 [211/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:29.717 [212/740] Generating lib/rte_rawdev_def with a custom command 00:01:29.717 [213/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:29.717 [214/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:29.717 [215/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:29.717 [216/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:29.717 [217/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:29.717 [218/740] Generating lib/rte_regexdev_def with a custom command 00:01:29.717 [219/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:29.717 [220/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:29.717 [221/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:29.717 [222/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:29.717 [223/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:29.717 [224/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:29.717 [225/740] Generating lib/rte_rib_def with a custom command 00:01:29.717 [226/740] Generating lib/rte_dmadev_def with a custom command 00:01:29.717 [227/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:29.717 [228/740] Generating lib/rte_rib_mingw with a custom command 00:01:29.717 [229/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:29.717 [230/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:29.717 [231/740] Generating lib/rte_reorder_def with a custom command 00:01:29.717 [232/740] Generating lib/rte_reorder_mingw with a custom command 00:01:29.976 [233/740] Generating lib/rte_sched_mingw with a custom command 00:01:29.976 [234/740] Generating lib/rte_sched_def with a custom command 00:01:29.976 [235/740] Generating lib/rte_security_def with a custom command 00:01:29.976 [236/740] Generating lib/rte_security_mingw with a custom command 00:01:29.976 [237/740] Generating lib/rte_stack_def with a custom command 00:01:29.976 [238/740] Generating lib/rte_stack_mingw with a custom command 00:01:29.976 [239/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:29.976 [240/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:29.976 [241/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:29.976 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:29.976 [243/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:29.976 [244/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:29.976 [245/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:29.976 [246/740] Generating lib/rte_vhost_def with a custom command 00:01:29.976 [247/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:29.976 [248/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:29.976 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:29.976 [250/740] Generating lib/rte_vhost_mingw with a custom command 00:01:29.976 [251/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:29.976 [252/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:29.976 
[253/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:29.976 [254/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:29.976 [255/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.976 [256/740] Linking static target lib/librte_compressdev.a 00:01:29.976 [257/740] Linking static target lib/librte_stack.a 00:01:29.976 [258/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:29.976 [259/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.976 [260/740] Generating lib/rte_ipsec_def with a custom command 00:01:29.976 [261/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:29.976 [262/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:29.976 [263/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:29.976 [264/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.976 [265/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:29.976 [266/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:29.976 [267/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:29.976 [268/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:29.976 [269/740] Generating lib/rte_fib_def with a custom command 00:01:29.976 [270/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:29.976 [271/740] Generating lib/rte_fib_mingw with a custom command 00:01:29.976 [272/740] Linking static target lib/librte_mempool.a 00:01:30.243 [273/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.243 [274/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.243 [275/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:30.243 [276/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:30.243 [277/740] Linking static target lib/librte_rcu.a 00:01:30.243 [278/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:30.243 [279/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.243 [280/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:30.243 [281/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:30.243 [282/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:30.243 [283/740] Linking static target lib/librte_bbdev.a 00:01:30.243 [284/740] Generating lib/rte_port_def with a custom command 00:01:30.243 [285/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.243 [286/740] Generating lib/rte_port_mingw with a custom command 00:01:30.243 [287/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:30.243 [288/740] Linking target lib/librte_telemetry.so.23.0 00:01:30.243 [289/740] Linking static target lib/librte_rawdev.a 00:01:30.243 [290/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:30.243 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:30.243 [292/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:30.243 [293/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:30.243 [294/740] Generating lib/rte_pdump_def with a custom command 00:01:30.243 
[295/740] Generating lib/rte_pdump_mingw with a custom command 00:01:30.243 [296/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:30.243 [297/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.243 [298/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:30.243 [299/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:30.243 [300/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:30.243 [301/740] Linking static target lib/librte_gpudev.a 00:01:30.243 [302/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:30.243 [303/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:30.243 [304/740] Linking static target lib/librte_dmadev.a 00:01:30.500 [305/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:30.500 [306/740] Linking static target lib/librte_distributor.a 00:01:30.500 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:30.500 [308/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:30.500 [309/740] Linking static target lib/librte_gro.a 00:01:30.500 [310/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:30.500 [311/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:30.500 [312/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:30.500 [313/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:30.500 [314/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:30.500 [315/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:30.500 [316/740] Linking static target lib/librte_latencystats.a 00:01:30.500 [317/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:30.500 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:30.500 [319/740] Linking static target lib/librte_gso.a 00:01:30.500 [320/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:30.500 [321/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:30.500 [322/740] Generating lib/rte_table_def with a custom command 00:01:30.500 [323/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:30.500 [324/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:30.500 [325/740] Generating lib/rte_table_mingw with a custom command 00:01:30.500 [326/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:30.500 [327/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:30.500 [328/740] Linking static target lib/librte_eal.a 00:01:30.500 [329/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.500 [330/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:30.759 [331/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:30.759 [332/740] Linking static target lib/librte_regexdev.a 00:01:30.759 [333/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:30.759 [334/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:30.759 [335/740] Generating lib/rte_pipeline_def with a custom command 00:01:30.759 [336/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:30.759 
[337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:30.759 [338/740] Linking static target lib/librte_ip_frag.a 00:01:30.759 [339/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:30.759 [340/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.759 [341/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:30.759 [342/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:30.759 [343/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:30.759 [344/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:30.759 [345/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:30.759 [346/740] Linking static target lib/librte_power.a 00:01:30.759 [347/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:30.759 [348/740] Generating lib/rte_graph_def with a custom command 00:01:30.759 [349/740] Generating lib/rte_graph_mingw with a custom command 00:01:30.759 [350/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.759 [351/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:30.759 [352/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.759 [353/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:30.759 [354/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.759 [355/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:30.759 [356/740] Linking static target lib/librte_reorder.a 00:01:30.759 [357/740] Linking static target lib/librte_mbuf.a 00:01:30.759 [358/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:30.759 [359/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:30.759 [360/740] Linking static target lib/librte_pcapng.a 00:01:30.759 [361/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:31.021 [362/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:31.021 [363/740] Generating lib/rte_node_def with a custom command 00:01:31.021 [364/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.021 [365/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.021 [366/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:31.021 [367/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:31.021 [368/740] Linking static target lib/librte_security.a 00:01:31.021 [369/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:31.022 [370/740] Linking static target lib/librte_bpf.a 00:01:31.022 [371/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.022 [372/740] Generating lib/rte_node_mingw with a custom command 00:01:31.022 [373/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:31.022 [374/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:31.022 [375/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:31.022 [376/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:31.022 [377/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:31.022 [378/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:31.022 [379/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:31.022 [380/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.022 [381/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:31.022 [382/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:31.022 [383/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:31.022 [384/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:31.022 [385/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:31.283 [386/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:31.283 [387/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:31.283 [388/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:31.283 [389/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.283 [390/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.283 [391/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.283 [392/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:31.283 [393/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:31.283 [394/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:31.283 [395/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.283 [396/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:31.283 [397/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.283 [398/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:31.283 [399/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:31.283 [400/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:31.283 [401/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:31.283 [402/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:31.283 [403/740] Linking static target lib/librte_rib.a 00:01:31.283 [404/740] Linking static target lib/librte_lpm.a 00:01:31.283 [405/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:31.283 [406/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:31.283 [407/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:31.283 [408/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:31.283 [409/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:31.283 [410/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.283 [411/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:31.283 [412/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:31.283 [413/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:31.283 [414/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.283 [415/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:31.283 [416/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:31.283 [417/740] 
Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:31.283 [418/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:31.283 [419/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:31.283 [420/740] Linking static target lib/librte_efd.a 00:01:31.283 [421/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:31.542 [422/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:31.542 [423/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:31.542 [424/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.542 [425/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:31.542 [426/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:31.542 [427/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:31.542 [428/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:31.542 [429/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:31.542 [430/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:31.542 [431/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:31.542 [432/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:31.542 [433/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:31.542 [434/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:31.542 [435/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:31.542 [436/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:31.542 [437/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:31.542 [438/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:31.542 [439/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:31.542 [440/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:31.542 [441/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:31.542 [442/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.542 [443/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:31.542 [444/740] Linking static target lib/librte_graph.a 00:01:31.805 [445/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:31.805 [446/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.805 [447/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:31.805 [448/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:31.805 [449/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.805 [450/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.805 [451/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:31.805 [452/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:31.805 [453/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:31.805 [454/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:31.805 [455/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:31.805 [456/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.805 [457/740] Generating 
lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.805 [458/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:31.805 [459/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.805 [460/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.805 [461/740] Linking static target lib/librte_fib.a 00:01:31.805 [462/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:31.805 [463/740] Linking static target drivers/librte_bus_vdev.a 00:01:31.805 [464/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:31.805 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:31.805 [466/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.067 [467/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:32.067 [468/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:32.067 [469/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:32.067 [470/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:32.067 [471/740] Linking static target lib/librte_pdump.a 00:01:32.067 [472/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:32.067 [473/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:32.067 [474/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:32.067 [475/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:32.067 [476/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.325 [477/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.325 [478/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.325 [479/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.325 [480/740] Linking static target drivers/librte_bus_pci.a 00:01:32.325 [481/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:32.325 [482/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:32.325 [483/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.325 [484/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:32.325 [485/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:32.325 [486/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:32.325 [487/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:32.325 [488/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:32.325 [489/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:32.325 [490/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:32.325 [491/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:32.590 [492/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:32.590 [493/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.590 [494/740] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:32.590 [495/740] Linking static target lib/librte_table.a 00:01:32.590 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:32.590 [497/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:32.590 [498/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.590 [499/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:32.590 [500/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:32.590 [501/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:32.590 [502/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.590 [503/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:32.590 [504/740] Linking static target lib/librte_cryptodev.a 00:01:32.590 [505/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:32.590 [506/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:32.590 [507/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:32.590 [508/740] Linking static target lib/librte_sched.a 00:01:32.590 [509/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:32.590 [510/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:32.848 [511/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:32.848 [512/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:32.848 [513/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:32.848 [514/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:32.848 [515/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:32.848 [516/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:32.848 [517/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:32.848 [518/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:32.848 [519/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:32.848 [520/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:32.848 [521/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:32.848 [522/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:32.848 [523/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.848 [524/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:32.848 [525/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:32.848 [526/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:32.848 [527/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:32.848 [528/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.848 [529/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:32.848 [530/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:32.848 [531/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 
00:01:32.848 [532/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:32.848 [533/740] Linking static target lib/librte_ethdev.a 00:01:32.848 [534/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:32.848 [535/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:32.848 [536/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:32.848 [537/740] Linking static target lib/librte_ipsec.a 00:01:33.106 [538/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:33.106 [539/740] Linking static target lib/librte_node.a 00:01:33.106 [540/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:33.106 [541/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:33.106 [542/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:33.106 [543/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.106 [544/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:33.106 [545/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:33.106 [546/740] Linking static target lib/librte_member.a 00:01:33.106 [547/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:33.106 [548/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:33.106 [549/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:33.106 [550/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:33.106 [551/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:33.106 [552/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.106 [553/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:33.106 [554/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:33.106 [555/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.106 [556/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.364 [557/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:33.364 [558/740] Linking static target drivers/librte_mempool_ring.a 00:01:33.364 [559/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.364 [560/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:33.364 [561/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:33.364 [562/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:33.364 [563/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:33.364 [564/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.364 [565/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:33.364 [566/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.364 [567/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:33.364 [568/740] Linking static target lib/librte_hash.a 00:01:33.364 [569/740] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:33.364 [570/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:33.364 [571/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:33.364 [572/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:33.364 [573/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:33.364 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:33.364 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:33.364 [576/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:33.364 [577/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:33.364 [578/740] Linking static target lib/librte_port.a 00:01:33.364 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:33.364 [580/740] Linking static target lib/librte_eventdev.a 00:01:33.364 [581/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.364 [582/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:33.364 [583/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:33.364 [584/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:33.364 [585/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:33.622 [586/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:33.622 [587/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:33.622 [588/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:33.622 [589/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:33.622 [590/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:33.622 [591/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:33.622 [592/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:33.622 [593/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:33.622 [594/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:33.879 [595/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:33.879 [596/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:33.879 [597/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:33.879 [598/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:33.879 [599/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:33.879 [600/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:33.879 [601/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:33.879 [602/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:33.879 [603/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:33.879 [604/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:33.879 [605/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:33.879 [606/740] Linking static target lib/librte_acl.a 00:01:34.137 [607/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:34.137 [608/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:34.137 [609/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:34.137 [610/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:34.137 [611/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.395 [612/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.395 [613/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:34.395 [614/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.395 [615/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:34.653 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:34.653 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:34.912 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:35.478 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:35.478 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:35.478 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:35.736 [622/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.996 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:35.996 [624/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:36.255 [625/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.512 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:36.512 [627/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:36.512 [628/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:36.512 [629/740] Linking static target drivers/librte_net_i40e.a 00:01:36.770 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:37.336 [631/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.336 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:37.594 [633/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:40.125 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.501 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.774 [636/740] Linking target lib/librte_eal.so.23.0 00:01:41.774 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:41.774 [638/740] Linking target lib/librte_ring.so.23.0 00:01:41.774 [639/740] Linking target lib/librte_dmadev.so.23.0 00:01:41.774 [640/740] Linking target lib/librte_meter.so.23.0 00:01:41.774 [641/740] Linking target lib/librte_timer.so.23.0 00:01:41.774 [642/740] Linking target lib/librte_pci.so.23.0 00:01:41.774 [643/740] Linking target lib/librte_cfgfile.so.23.0 00:01:41.774 [644/740] Linking target lib/librte_jobstats.so.23.0 00:01:41.774 [645/740] Linking target lib/librte_stack.so.23.0 00:01:41.774 [646/740] Linking target drivers/librte_bus_vdev.so.23.0 00:01:41.774 [647/740] Linking target lib/librte_rawdev.so.23.0 00:01:41.774 [648/740] Linking target lib/librte_graph.so.23.0 
00:01:41.774 [649/740] Linking target lib/librte_acl.so.23.0 00:01:42.032 [650/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:42.032 [651/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:42.032 [652/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:42.032 [653/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:42.032 [654/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:42.032 [655/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:42.032 [656/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:42.032 [657/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:42.032 [658/740] Linking target drivers/librte_bus_pci.so.23.0 00:01:42.032 [659/740] Linking target lib/librte_rcu.so.23.0 00:01:42.032 [660/740] Linking target lib/librte_mempool.so.23.0 00:01:42.032 [661/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:42.291 [662/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:42.291 [663/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:42.291 [664/740] Linking target lib/librte_mbuf.so.23.0 00:01:42.291 [665/740] Linking target lib/librte_rib.so.23.0 00:01:42.291 [666/740] Linking target drivers/librte_mempool_ring.so.23.0 00:01:42.291 [667/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:42.291 [668/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:42.291 [669/740] Linking target lib/librte_net.so.23.0 00:01:42.291 [670/740] Linking target lib/librte_compressdev.so.23.0 00:01:42.291 [671/740] Linking target lib/librte_bbdev.so.23.0 00:01:42.291 [672/740] Linking target lib/librte_regexdev.so.23.0 00:01:42.291 [673/740] Linking target lib/librte_gpudev.so.23.0 00:01:42.291 [674/740] Linking target lib/librte_reorder.so.23.0 00:01:42.291 [675/740] Linking target lib/librte_distributor.so.23.0 00:01:42.291 [676/740] Linking target lib/librte_fib.so.23.0 00:01:42.291 [677/740] Linking target lib/librte_sched.so.23.0 00:01:42.291 [678/740] Linking target lib/librte_cryptodev.so.23.0 00:01:42.550 [679/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:42.550 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:42.550 [681/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:42.550 [682/740] Linking target lib/librte_hash.so.23.0 00:01:42.550 [683/740] Linking target lib/librte_cmdline.so.23.0 00:01:42.550 [684/740] Linking target lib/librte_ethdev.so.23.0 00:01:42.550 [685/740] Linking target lib/librte_security.so.23.0 00:01:42.550 [686/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:42.808 [687/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:42.808 [688/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:42.808 [689/740] Linking target lib/librte_lpm.so.23.0 00:01:42.808 [690/740] Linking target lib/librte_efd.so.23.0 00:01:42.808 [691/740] Linking target lib/librte_member.so.23.0 00:01:42.808 [692/740] Linking target 
lib/librte_metrics.so.23.0 00:01:42.808 [693/740] Linking target lib/librte_eventdev.so.23.0 00:01:42.808 [694/740] Linking target lib/librte_gso.so.23.0 00:01:42.808 [695/740] Linking target lib/librte_pcapng.so.23.0 00:01:42.808 [696/740] Linking target lib/librte_gro.so.23.0 00:01:42.808 [697/740] Linking target lib/librte_bpf.so.23.0 00:01:42.808 [698/740] Linking target lib/librte_ipsec.so.23.0 00:01:42.808 [699/740] Linking target lib/librte_ip_frag.so.23.0 00:01:42.808 [700/740] Linking target lib/librte_power.so.23.0 00:01:42.808 [701/740] Linking target drivers/librte_net_i40e.so.23.0 00:01:42.808 [702/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:42.808 [703/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:42.808 [704/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:42.808 [705/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:42.808 [706/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:42.808 [707/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:42.808 [708/740] Linking target lib/librte_node.so.23.0 00:01:42.809 [709/740] Linking target lib/librte_bitratestats.so.23.0 00:01:42.809 [710/740] Linking target lib/librte_pdump.so.23.0 00:01:42.809 [711/740] Linking target lib/librte_latencystats.so.23.0 00:01:42.809 [712/740] Linking target lib/librte_port.so.23.0 00:01:43.067 [713/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:43.067 [714/740] Linking target lib/librte_table.so.23.0 00:01:43.325 [715/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:43.584 [716/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:43.584 [717/740] Linking static target lib/librte_vhost.a 00:01:44.518 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:44.518 [719/740] Linking static target lib/librte_pipeline.a 00:01:44.776 [720/740] Linking target app/dpdk-test-gpudev 00:01:44.776 [721/740] Linking target app/dpdk-test-acl 00:01:44.776 [722/740] Linking target app/dpdk-test-cmdline 00:01:44.776 [723/740] Linking target app/dpdk-test-crypto-perf 00:01:44.776 [724/740] Linking target app/dpdk-proc-info 00:01:44.776 [725/740] Linking target app/dpdk-test-compress-perf 00:01:44.776 [726/740] Linking target app/dpdk-test-sad 00:01:44.776 [727/740] Linking target app/dpdk-test-pipeline 00:01:44.776 [728/740] Linking target app/dpdk-test-security-perf 00:01:44.776 [729/740] Linking target app/dpdk-pdump 00:01:44.776 [730/740] Linking target app/dpdk-dumpcap 00:01:44.776 [731/740] Linking target app/dpdk-test-regex 00:01:44.776 [732/740] Linking target app/dpdk-test-fib 00:01:44.776 [733/740] Linking target app/dpdk-test-flow-perf 00:01:44.776 [734/740] Linking target app/dpdk-test-bbdev 00:01:44.776 [735/740] Linking target app/dpdk-test-eventdev 00:01:44.776 [736/740] Linking target app/dpdk-testpmd 00:01:45.343 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.343 [738/740] Linking target lib/librte_vhost.so.23.0 00:01:48.688 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.688 [740/740] Linking target lib/librte_pipeline.so.23.0 00:01:48.688 10:23:56 build_native_dpdk -- common/autobuild_common.sh@191 
-- $ uname -s 00:01:48.688 10:23:56 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:48.688 10:23:56 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j96 install 00:01:48.688 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:48.688 [0/1] Installing files. 00:01:48.950 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 
00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:48.950 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.951 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:48.952 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:48.952 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.952 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:48.954 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:48.955 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:48.955 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:01:48.955 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.955 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_bitratestats.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:48.956 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 
Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:49.218 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:49.218 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:49.218 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.218 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:49.218 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.218 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
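The install entries above stage DPDK's public headers (ethdev, cmdline, hash, lpm, table, pipeline, and the rest) into /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include, and the matching libdpdk.pc pkg-config files land in build/lib/pkgconfig a few entries further down. As a minimal sketch of consuming that staged install outside this job, assuming only a compiler and pkg-config on PATH (the demo.c file and the exported variables are illustrative, not part of the pipeline):

# Point pkg-config at the staged DPDK install from this workspace.
DPDK_BUILD=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig

cat > demo.c <<'EOF'
#include <stdio.h>
#include <rte_jhash.h>   /* one of the hash headers installed above */

int main(void)
{
    const char key[] = "nvmf-phy-autotest";
    /* rte_jhash() is an inline hash helper and needs no EAL initialization. */
    printf("jhash = 0x%08x\n", (unsigned)rte_jhash(key, sizeof(key) - 1, 0));
    return 0;
}
EOF

cc -o demo demo.c $(pkg-config --cflags --libs libdpdk)
LD_LIBRARY_PATH=$DPDK_BUILD/lib ./demo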
00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:49.222 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:49.222 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:01:49.222 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:49.222 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:01:49.222 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:49.222 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:01:49.222 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:49.222 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:01:49.222 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:49.222 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:01:49.222 
Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:49.222 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:01:49.222 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:49.222 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:01:49.222 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:49.222 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:01:49.222 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:01:49.223 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:01:49.223 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:49.223 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:01:49.223 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:49.223 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:01:49.223 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:49.223 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:01:49.223 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:49.223 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:01:49.223 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:49.223 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:01:49.223 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:49.223 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:01:49.223 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:49.223 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:01:49.223 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:49.223 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:01:49.223 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:49.223 Installing symlink pointing to 
librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:01:49.223 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:49.223 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:01:49.223 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:49.223 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:01:49.223 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:49.223 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:01:49.223 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:49.223 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:01:49.223 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:49.223 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:01:49.223 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:49.223 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:01:49.223 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:49.223 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:01:49.223 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:49.223 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:01:49.223 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:49.223 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:01:49.223 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:49.223 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:01:49.223 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:49.223 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:01:49.223 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:49.223 Installing symlink pointing to librte_jobstats.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:01:49.223 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:49.223 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:01:49.223 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:49.223 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:01:49.223 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:49.223 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:01:49.223 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:01:49.223 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:01:49.223 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:49.223 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:01:49.223 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:01:49.223 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:01:49.223 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:49.223 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:01:49.223 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:49.223 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:01:49.223 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:49.223 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:01:49.223 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:49.223 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:01:49.223 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:49.223 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:01:49.224 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:49.224 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:01:49.224 Installing symlink pointing to 
librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:01:49.224 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:01:49.224 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:49.224 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:01:49.224 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:49.224 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:01:49.224 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:49.224 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:01:49.224 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:49.224 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:01:49.224 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:01:49.224 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:01:49.224 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:01:49.224 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:01:49.224 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:01:49.224 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:01:49.224 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:01:49.224 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:01:49.224 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:01:49.224 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:01:49.224 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:01:49.224 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:01:49.224 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:01:49.224 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:01:49.224 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:49.224 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:01:49.224 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:01:49.224 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:01:49.224 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:49.224 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:01:49.224 Installing symlink pointing to librte_graph.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:49.224 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:01:49.224 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:01:49.224 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:01:49.224 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:01:49.224 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:01:49.224 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:01:49.224 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:01:49.224 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:01:49.224 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:01:49.224 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:01:49.224 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:01:49.224 10:23:56 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:01:49.224 10:23:56 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:49.224 00:01:49.224 real 0m26.159s 00:01:49.224 user 7m20.882s 00:01:49.224 sys 1m47.642s 00:01:49.224 10:23:56 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:49.224 10:23:56 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:49.224 ************************************ 00:01:49.224 END TEST build_native_dpdk 00:01:49.224 ************************************ 00:01:49.224 10:23:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:49.224 10:23:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:49.224 10:23:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:49.224 10:23:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:49.224 10:23:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:49.224 10:23:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:49.224 10:23:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:49.224 10:23:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:01:49.483 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
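The configure line above wires the SPDK tree to the DPDK install performed earlier in this log: --with-dpdk points at dpdk/build, and the "Using ... pkgconfig for additional libs" message is SPDK picking up the libdpdk.pc files recorded above. A rough, hand-run equivalent of that pairing might look like the following sketch; the prefix and -j width are placeholders and the flag set is trimmed (the job's full option list is in the configure line above):

# Build and stage DPDK, mirroring the build-tmp/ (objects) vs build/ (install) split seen in this log.
cd dpdk
meson setup build-tmp --prefix=$PWD/build
ninja -C build-tmp
meson install -C build-tmp

# Point SPDK's configure at the staged DPDK and build with the shared-library layout used here.
cd ../spdk
./configure --with-dpdk=../dpdk/build --with-shared --enable-debug --enable-werror
make -j$(nproc)

The "DPDK libraries" / "DPDK includes" lines that follow are configure reporting back the paths it resolved from that pkg-config data.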
00:01:49.483 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:49.483 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:49.483 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:49.741 Using 'verbs' RDMA provider 00:02:02.885 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:15.090 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:15.090 Creating mk/config.mk...done. 00:02:15.090 Creating mk/cc.flags.mk...done. 00:02:15.090 Type 'make' to build. 00:02:15.090 10:24:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:15.090 10:24:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:15.090 10:24:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:15.090 10:24:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.090 ************************************ 00:02:15.090 START TEST make 00:02:15.090 ************************************ 00:02:15.090 10:24:20 make -- common/autotest_common.sh@1125 -- $ make -j96 00:02:15.090 make[1]: Nothing to be done for 'all'. 00:02:25.053 CC lib/ut/ut.o 00:02:25.053 CC lib/ut_mock/mock.o 00:02:25.053 CC lib/log/log.o 00:02:25.053 CC lib/log/log_deprecated.o 00:02:25.053 CC lib/log/log_flags.o 00:02:25.053 LIB libspdk_ut.a 00:02:25.053 SO libspdk_ut.so.2.0 00:02:25.053 LIB libspdk_ut_mock.a 00:02:25.053 LIB libspdk_log.a 00:02:25.053 SO libspdk_ut_mock.so.6.0 00:02:25.053 SYMLINK libspdk_ut.so 00:02:25.053 SO libspdk_log.so.7.0 00:02:25.053 SYMLINK libspdk_ut_mock.so 00:02:25.053 SYMLINK libspdk_log.so 00:02:25.053 CC lib/ioat/ioat.o 00:02:25.053 CXX lib/trace_parser/trace.o 00:02:25.053 CC lib/dma/dma.o 00:02:25.053 CC lib/util/base64.o 00:02:25.053 CC lib/util/bit_array.o 00:02:25.053 CC lib/util/cpuset.o 00:02:25.053 CC lib/util/crc32.o 00:02:25.053 CC lib/util/crc16.o 00:02:25.053 CC lib/util/crc32c.o 00:02:25.053 CC lib/util/crc32_ieee.o 00:02:25.053 CC lib/util/crc64.o 00:02:25.053 CC lib/util/dif.o 00:02:25.053 CC lib/util/fd.o 00:02:25.053 CC lib/util/fd_group.o 00:02:25.053 CC lib/util/file.o 00:02:25.053 CC lib/util/hexlify.o 00:02:25.053 CC lib/util/iov.o 00:02:25.053 CC lib/util/math.o 00:02:25.053 CC lib/util/net.o 00:02:25.053 CC lib/util/pipe.o 00:02:25.053 CC lib/util/strerror_tls.o 00:02:25.053 CC lib/util/string.o 00:02:25.053 CC lib/util/uuid.o 00:02:25.053 CC lib/util/xor.o 00:02:25.053 CC lib/util/zipf.o 00:02:25.053 CC lib/vfio_user/host/vfio_user_pci.o 00:02:25.053 CC lib/vfio_user/host/vfio_user.o 00:02:25.053 LIB libspdk_dma.a 00:02:25.053 SO libspdk_dma.so.4.0 00:02:25.053 LIB libspdk_ioat.a 00:02:25.053 SYMLINK libspdk_dma.so 00:02:25.053 SO libspdk_ioat.so.7.0 00:02:25.053 SYMLINK libspdk_ioat.so 00:02:25.053 LIB libspdk_vfio_user.a 00:02:25.053 SO libspdk_vfio_user.so.5.0 00:02:25.053 LIB libspdk_util.a 00:02:25.053 SYMLINK libspdk_vfio_user.so 00:02:25.053 SO libspdk_util.so.10.0 00:02:25.312 SYMLINK libspdk_util.so 00:02:25.312 LIB libspdk_trace_parser.a 00:02:25.312 SO libspdk_trace_parser.so.5.0 00:02:25.312 SYMLINK libspdk_trace_parser.so 00:02:25.569 CC lib/rdma_provider/common.o 00:02:25.569 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:25.569 CC lib/conf/conf.o 00:02:25.569 CC lib/env_dpdk/env.o 00:02:25.569 CC lib/env_dpdk/memory.o 00:02:25.569 CC lib/env_dpdk/pci.o 00:02:25.569 CC lib/env_dpdk/init.o 00:02:25.569 CC lib/vmd/led.o 00:02:25.569 CC lib/vmd/vmd.o 00:02:25.569 
CC lib/env_dpdk/threads.o 00:02:25.569 CC lib/env_dpdk/pci_ioat.o 00:02:25.569 CC lib/env_dpdk/pci_vmd.o 00:02:25.569 CC lib/env_dpdk/pci_virtio.o 00:02:25.569 CC lib/env_dpdk/pci_idxd.o 00:02:25.569 CC lib/env_dpdk/sigbus_handler.o 00:02:25.569 CC lib/env_dpdk/pci_event.o 00:02:25.569 CC lib/env_dpdk/pci_dpdk.o 00:02:25.569 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:25.569 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:25.569 CC lib/json/json_parse.o 00:02:25.569 CC lib/rdma_utils/rdma_utils.o 00:02:25.569 CC lib/json/json_util.o 00:02:25.569 CC lib/idxd/idxd.o 00:02:25.569 CC lib/idxd/idxd_kernel.o 00:02:25.569 CC lib/json/json_write.o 00:02:25.569 CC lib/idxd/idxd_user.o 00:02:25.826 LIB libspdk_rdma_provider.a 00:02:25.826 LIB libspdk_conf.a 00:02:25.826 SO libspdk_rdma_provider.so.6.0 00:02:25.826 SO libspdk_conf.so.6.0 00:02:25.826 LIB libspdk_rdma_utils.a 00:02:25.826 LIB libspdk_json.a 00:02:25.826 SYMLINK libspdk_conf.so 00:02:25.826 SO libspdk_rdma_utils.so.1.0 00:02:25.826 SYMLINK libspdk_rdma_provider.so 00:02:25.826 SO libspdk_json.so.6.0 00:02:25.826 SYMLINK libspdk_rdma_utils.so 00:02:25.826 SYMLINK libspdk_json.so 00:02:26.083 LIB libspdk_idxd.a 00:02:26.083 SO libspdk_idxd.so.12.0 00:02:26.083 LIB libspdk_vmd.a 00:02:26.083 SO libspdk_vmd.so.6.0 00:02:26.083 SYMLINK libspdk_idxd.so 00:02:26.083 SYMLINK libspdk_vmd.so 00:02:26.083 CC lib/jsonrpc/jsonrpc_server.o 00:02:26.083 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:26.083 CC lib/jsonrpc/jsonrpc_client.o 00:02:26.083 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:26.341 LIB libspdk_jsonrpc.a 00:02:26.341 SO libspdk_jsonrpc.so.6.0 00:02:26.599 SYMLINK libspdk_jsonrpc.so 00:02:26.599 LIB libspdk_env_dpdk.a 00:02:26.599 SO libspdk_env_dpdk.so.15.0 00:02:26.857 SYMLINK libspdk_env_dpdk.so 00:02:26.857 CC lib/rpc/rpc.o 00:02:26.857 LIB libspdk_rpc.a 00:02:27.115 SO libspdk_rpc.so.6.0 00:02:27.115 SYMLINK libspdk_rpc.so 00:02:27.371 CC lib/notify/notify_rpc.o 00:02:27.371 CC lib/notify/notify.o 00:02:27.371 CC lib/keyring/keyring.o 00:02:27.371 CC lib/keyring/keyring_rpc.o 00:02:27.371 CC lib/trace/trace.o 00:02:27.371 CC lib/trace/trace_flags.o 00:02:27.371 CC lib/trace/trace_rpc.o 00:02:27.371 LIB libspdk_notify.a 00:02:27.628 SO libspdk_notify.so.6.0 00:02:27.628 LIB libspdk_keyring.a 00:02:27.628 LIB libspdk_trace.a 00:02:27.628 SYMLINK libspdk_notify.so 00:02:27.628 SO libspdk_keyring.so.1.0 00:02:27.628 SO libspdk_trace.so.10.0 00:02:27.628 SYMLINK libspdk_keyring.so 00:02:27.628 SYMLINK libspdk_trace.so 00:02:27.886 CC lib/thread/thread.o 00:02:27.886 CC lib/thread/iobuf.o 00:02:27.886 CC lib/sock/sock.o 00:02:27.886 CC lib/sock/sock_rpc.o 00:02:28.453 LIB libspdk_sock.a 00:02:28.453 SO libspdk_sock.so.10.0 00:02:28.453 SYMLINK libspdk_sock.so 00:02:28.712 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:28.712 CC lib/nvme/nvme_ctrlr.o 00:02:28.712 CC lib/nvme/nvme_fabric.o 00:02:28.712 CC lib/nvme/nvme_ns_cmd.o 00:02:28.712 CC lib/nvme/nvme_ns.o 00:02:28.712 CC lib/nvme/nvme_pcie_common.o 00:02:28.712 CC lib/nvme/nvme_pcie.o 00:02:28.712 CC lib/nvme/nvme_qpair.o 00:02:28.712 CC lib/nvme/nvme.o 00:02:28.712 CC lib/nvme/nvme_quirks.o 00:02:28.712 CC lib/nvme/nvme_transport.o 00:02:28.712 CC lib/nvme/nvme_discovery.o 00:02:28.712 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:28.712 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:28.712 CC lib/nvme/nvme_tcp.o 00:02:28.712 CC lib/nvme/nvme_opal.o 00:02:28.712 CC lib/nvme/nvme_io_msg.o 00:02:28.712 CC lib/nvme/nvme_poll_group.o 00:02:28.712 CC lib/nvme/nvme_zns.o 00:02:28.712 CC lib/nvme/nvme_stubs.o 00:02:28.712 CC 
lib/nvme/nvme_auth.o 00:02:28.712 CC lib/nvme/nvme_cuse.o 00:02:28.712 CC lib/nvme/nvme_rdma.o 00:02:28.970 LIB libspdk_thread.a 00:02:28.970 SO libspdk_thread.so.10.1 00:02:28.970 SYMLINK libspdk_thread.so 00:02:29.229 CC lib/init/subsystem_rpc.o 00:02:29.229 CC lib/init/json_config.o 00:02:29.229 CC lib/init/subsystem.o 00:02:29.229 CC lib/init/rpc.o 00:02:29.229 CC lib/blob/blobstore.o 00:02:29.229 CC lib/blob/zeroes.o 00:02:29.229 CC lib/blob/request.o 00:02:29.229 CC lib/blob/blob_bs_dev.o 00:02:29.487 CC lib/virtio/virtio.o 00:02:29.487 CC lib/virtio/virtio_vfio_user.o 00:02:29.487 CC lib/virtio/virtio_vhost_user.o 00:02:29.487 CC lib/virtio/virtio_pci.o 00:02:29.487 CC lib/accel/accel.o 00:02:29.487 CC lib/accel/accel_rpc.o 00:02:29.487 CC lib/accel/accel_sw.o 00:02:29.487 LIB libspdk_init.a 00:02:29.487 SO libspdk_init.so.5.0 00:02:29.745 LIB libspdk_virtio.a 00:02:29.745 SYMLINK libspdk_init.so 00:02:29.745 SO libspdk_virtio.so.7.0 00:02:29.745 SYMLINK libspdk_virtio.so 00:02:30.003 CC lib/event/app.o 00:02:30.003 CC lib/event/reactor.o 00:02:30.003 CC lib/event/log_rpc.o 00:02:30.003 CC lib/event/app_rpc.o 00:02:30.003 CC lib/event/scheduler_static.o 00:02:30.003 LIB libspdk_accel.a 00:02:30.003 SO libspdk_accel.so.16.0 00:02:30.261 SYMLINK libspdk_accel.so 00:02:30.261 LIB libspdk_event.a 00:02:30.261 LIB libspdk_nvme.a 00:02:30.261 SO libspdk_event.so.14.0 00:02:30.261 SO libspdk_nvme.so.13.1 00:02:30.520 SYMLINK libspdk_event.so 00:02:30.520 CC lib/bdev/bdev.o 00:02:30.520 CC lib/bdev/bdev_rpc.o 00:02:30.520 CC lib/bdev/bdev_zone.o 00:02:30.520 CC lib/bdev/part.o 00:02:30.520 CC lib/bdev/scsi_nvme.o 00:02:30.520 SYMLINK libspdk_nvme.so 00:02:31.522 LIB libspdk_blob.a 00:02:31.522 SO libspdk_blob.so.11.0 00:02:31.522 SYMLINK libspdk_blob.so 00:02:31.781 CC lib/lvol/lvol.o 00:02:31.781 CC lib/blobfs/blobfs.o 00:02:31.781 CC lib/blobfs/tree.o 00:02:32.348 LIB libspdk_bdev.a 00:02:32.348 SO libspdk_bdev.so.16.0 00:02:32.348 SYMLINK libspdk_bdev.so 00:02:32.348 LIB libspdk_blobfs.a 00:02:32.348 SO libspdk_blobfs.so.10.0 00:02:32.348 LIB libspdk_lvol.a 00:02:32.607 SO libspdk_lvol.so.10.0 00:02:32.607 SYMLINK libspdk_blobfs.so 00:02:32.607 SYMLINK libspdk_lvol.so 00:02:32.607 CC lib/nbd/nbd.o 00:02:32.607 CC lib/nbd/nbd_rpc.o 00:02:32.607 CC lib/ftl/ftl_core.o 00:02:32.607 CC lib/ftl/ftl_init.o 00:02:32.607 CC lib/ftl/ftl_layout.o 00:02:32.607 CC lib/ftl/ftl_io.o 00:02:32.607 CC lib/ftl/ftl_debug.o 00:02:32.607 CC lib/ftl/ftl_sb.o 00:02:32.607 CC lib/nvmf/ctrlr.o 00:02:32.607 CC lib/ftl/ftl_l2p.o 00:02:32.607 CC lib/nvmf/ctrlr_discovery.o 00:02:32.607 CC lib/ftl/ftl_l2p_flat.o 00:02:32.607 CC lib/ftl/ftl_band.o 00:02:32.607 CC lib/nvmf/ctrlr_bdev.o 00:02:32.607 CC lib/ftl/ftl_nv_cache.o 00:02:32.607 CC lib/scsi/dev.o 00:02:32.607 CC lib/nvmf/subsystem.o 00:02:32.607 CC lib/ftl/ftl_writer.o 00:02:32.607 CC lib/nvmf/nvmf.o 00:02:32.607 CC lib/ftl/ftl_band_ops.o 00:02:32.607 CC lib/nvmf/nvmf_rpc.o 00:02:32.607 CC lib/scsi/lun.o 00:02:32.607 CC lib/nvmf/tcp.o 00:02:32.607 CC lib/scsi/port.o 00:02:32.607 CC lib/nvmf/transport.o 00:02:32.607 CC lib/ftl/ftl_rq.o 00:02:32.607 CC lib/ftl/ftl_reloc.o 00:02:32.607 CC lib/ftl/ftl_l2p_cache.o 00:02:32.607 CC lib/scsi/scsi_pr.o 00:02:32.607 CC lib/nvmf/stubs.o 00:02:32.607 CC lib/scsi/scsi.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.607 CC lib/ftl/ftl_p2l.o 00:02:32.607 CC lib/scsi/scsi_rpc.o 00:02:32.607 CC lib/nvmf/mdns_server.o 00:02:32.607 CC lib/scsi/scsi_bdev.o 00:02:32.607 CC lib/nvmf/rdma.o 00:02:32.607 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.607 CC lib/ublk/ublk_rpc.o 00:02:32.607 CC lib/nvmf/auth.o 00:02:32.607 CC lib/scsi/task.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.607 CC lib/ublk/ublk.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.607 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.607 CC lib/ftl/utils/ftl_conf.o 00:02:32.607 CC lib/ftl/utils/ftl_md.o 00:02:32.607 CC lib/ftl/utils/ftl_mempool.o 00:02:32.607 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.607 CC lib/ftl/utils/ftl_property.o 00:02:32.607 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.607 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.607 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.607 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.607 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.607 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:32.607 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.607 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.607 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.607 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.607 CC lib/ftl/base/ftl_base_dev.o 00:02:32.607 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.607 CC lib/ftl/base/ftl_base_bdev.o 00:02:32.607 CC lib/ftl/ftl_trace.o 00:02:33.171 LIB libspdk_nbd.a 00:02:33.171 SO libspdk_nbd.so.7.0 00:02:33.171 SYMLINK libspdk_nbd.so 00:02:33.429 LIB libspdk_scsi.a 00:02:33.429 SO libspdk_scsi.so.9.0 00:02:33.429 LIB libspdk_ublk.a 00:02:33.429 SO libspdk_ublk.so.3.0 00:02:33.429 SYMLINK libspdk_scsi.so 00:02:33.429 SYMLINK libspdk_ublk.so 00:02:33.429 LIB libspdk_ftl.a 00:02:33.687 SO libspdk_ftl.so.9.0 00:02:33.687 CC lib/iscsi/conn.o 00:02:33.687 CC lib/iscsi/init_grp.o 00:02:33.687 CC lib/iscsi/md5.o 00:02:33.687 CC lib/iscsi/param.o 00:02:33.687 CC lib/iscsi/iscsi.o 00:02:33.687 CC lib/iscsi/portal_grp.o 00:02:33.687 CC lib/iscsi/tgt_node.o 00:02:33.687 CC lib/iscsi/iscsi_subsystem.o 00:02:33.687 CC lib/iscsi/iscsi_rpc.o 00:02:33.687 CC lib/iscsi/task.o 00:02:33.687 CC lib/vhost/vhost.o 00:02:33.687 CC lib/vhost/vhost_rpc.o 00:02:33.687 CC lib/vhost/vhost_scsi.o 00:02:33.687 CC lib/vhost/vhost_blk.o 00:02:33.687 CC lib/vhost/rte_vhost_user.o 00:02:33.945 SYMLINK libspdk_ftl.so 00:02:34.511 LIB libspdk_nvmf.a 00:02:34.511 SO libspdk_nvmf.so.19.0 00:02:34.511 LIB libspdk_vhost.a 00:02:34.511 SO libspdk_vhost.so.8.0 00:02:34.511 SYMLINK libspdk_nvmf.so 00:02:34.511 SYMLINK libspdk_vhost.so 00:02:34.770 LIB libspdk_iscsi.a 00:02:34.770 SO libspdk_iscsi.so.8.0 00:02:34.770 SYMLINK libspdk_iscsi.so 00:02:35.335 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.335 CC module/sock/posix/posix.o 00:02:35.335 CC module/blob/bdev/blob_bdev.o 00:02:35.335 CC module/keyring/file/keyring.o 00:02:35.335 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.335 CC module/keyring/file/keyring_rpc.o 00:02:35.335 LIB libspdk_env_dpdk_rpc.a 00:02:35.335 CC module/accel/error/accel_error.o 00:02:35.335 CC module/accel/error/accel_error_rpc.o 00:02:35.335 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.335 CC module/keyring/linux/keyring.o 00:02:35.335 CC module/accel/iaa/accel_iaa.o 00:02:35.335 CC module/keyring/linux/keyring_rpc.o 00:02:35.335 CC module/accel/ioat/accel_ioat.o 00:02:35.335 CC module/accel/ioat/accel_ioat_rpc.o 
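The objects above link the NVMe, NVMe-oF (lib/nvmf), iSCSI, vhost, FTL and SCSI libraries, plus the first plugin modules (sock, accel, keyring, scheduler) that later test stages in this job exercise. None of those targets can run without hugepages and a userspace-bound NVMe device; the conventional preparation is SPDK's scripts/setup.sh, with DPDK's dpdk-devbind.py (installed into dpdk/build/bin earlier in this log) available for the same device inspection. A hedged sketch, with the hugepage size as a placeholder rather than a value taken from this job:

# Reserve hugepages and bind NVMe devices to a userspace driver before running SPDK apps.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo HUGEMEM=4096 ./scripts/setup.sh       # 4 GiB of hugepages; binds NVMe controllers to uio/vfio
./scripts/setup.sh status                  # report current hugepage and binding state

# The DPDK tool staged above gives the same device view:
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin/dpdk-devbind.py --status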
00:02:35.593 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.593 CC module/accel/dsa/accel_dsa.o 00:02:35.593 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.593 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.593 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.593 SYMLINK libspdk_env_dpdk_rpc.so 00:02:35.593 LIB libspdk_keyring_file.a 00:02:35.593 LIB libspdk_accel_error.a 00:02:35.593 LIB libspdk_keyring_linux.a 00:02:35.593 LIB libspdk_scheduler_gscheduler.a 00:02:35.593 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.593 SO libspdk_accel_error.so.2.0 00:02:35.593 LIB libspdk_accel_ioat.a 00:02:35.593 LIB libspdk_scheduler_dynamic.a 00:02:35.593 SO libspdk_scheduler_gscheduler.so.4.0 00:02:35.593 SO libspdk_keyring_linux.so.1.0 00:02:35.593 SO libspdk_keyring_file.so.1.0 00:02:35.593 LIB libspdk_accel_iaa.a 00:02:35.593 LIB libspdk_blob_bdev.a 00:02:35.593 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:35.593 SO libspdk_accel_ioat.so.6.0 00:02:35.593 SO libspdk_scheduler_dynamic.so.4.0 00:02:35.593 SO libspdk_accel_iaa.so.3.0 00:02:35.593 SO libspdk_blob_bdev.so.11.0 00:02:35.593 SYMLINK libspdk_accel_error.so 00:02:35.593 LIB libspdk_accel_dsa.a 00:02:35.593 SYMLINK libspdk_scheduler_gscheduler.so 00:02:35.593 SYMLINK libspdk_keyring_linux.so 00:02:35.593 SYMLINK libspdk_keyring_file.so 00:02:35.593 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:35.851 SYMLINK libspdk_scheduler_dynamic.so 00:02:35.851 SO libspdk_accel_dsa.so.5.0 00:02:35.851 SYMLINK libspdk_blob_bdev.so 00:02:35.851 SYMLINK libspdk_accel_ioat.so 00:02:35.851 SYMLINK libspdk_accel_iaa.so 00:02:35.851 SYMLINK libspdk_accel_dsa.so 00:02:36.110 LIB libspdk_sock_posix.a 00:02:36.110 SO libspdk_sock_posix.so.6.0 00:02:36.110 SYMLINK libspdk_sock_posix.so 00:02:36.110 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.110 CC module/bdev/error/vbdev_error.o 00:02:36.110 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.110 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.110 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.110 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.110 CC module/bdev/delay/vbdev_delay.o 00:02:36.110 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.110 CC module/bdev/null/bdev_null.o 00:02:36.110 CC module/bdev/null/bdev_null_rpc.o 00:02:36.110 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.110 CC module/bdev/aio/bdev_aio.o 00:02:36.110 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.110 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.110 CC module/bdev/nvme/bdev_nvme.o 00:02:36.110 CC module/bdev/raid/bdev_raid.o 00:02:36.110 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.110 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.110 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.110 CC module/bdev/nvme/nvme_rpc.o 00:02:36.110 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.110 CC module/bdev/raid/raid0.o 00:02:36.110 CC module/bdev/raid/concat.o 00:02:36.110 CC module/bdev/nvme/vbdev_opal.o 00:02:36.110 CC module/bdev/raid/raid1.o 00:02:36.110 CC module/bdev/ftl/bdev_ftl.o 00:02:36.110 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.110 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.110 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.110 CC module/bdev/split/vbdev_split.o 00:02:36.110 CC module/bdev/gpt/gpt.o 00:02:36.110 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.110 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.110 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.110 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.110 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.110 CC module/bdev/passthru/vbdev_passthru_rpc.o 
00:02:36.110 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.110 CC module/bdev/malloc/bdev_malloc.o 00:02:36.110 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.110 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.110 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.371 LIB libspdk_blobfs_bdev.a 00:02:36.371 SO libspdk_blobfs_bdev.so.6.0 00:02:36.371 LIB libspdk_bdev_error.a 00:02:36.371 LIB libspdk_bdev_split.a 00:02:36.371 LIB libspdk_bdev_null.a 00:02:36.371 SYMLINK libspdk_blobfs_bdev.so 00:02:36.371 SO libspdk_bdev_error.so.6.0 00:02:36.371 LIB libspdk_bdev_ftl.a 00:02:36.629 SO libspdk_bdev_split.so.6.0 00:02:36.629 SO libspdk_bdev_null.so.6.0 00:02:36.629 SO libspdk_bdev_ftl.so.6.0 00:02:36.629 SYMLINK libspdk_bdev_error.so 00:02:36.629 LIB libspdk_bdev_aio.a 00:02:36.629 LIB libspdk_bdev_gpt.a 00:02:36.629 LIB libspdk_bdev_passthru.a 00:02:36.629 SYMLINK libspdk_bdev_null.so 00:02:36.629 LIB libspdk_bdev_iscsi.a 00:02:36.629 SYMLINK libspdk_bdev_split.so 00:02:36.629 LIB libspdk_bdev_malloc.a 00:02:36.629 SO libspdk_bdev_aio.so.6.0 00:02:36.629 LIB libspdk_bdev_delay.a 00:02:36.629 SO libspdk_bdev_iscsi.so.6.0 00:02:36.629 LIB libspdk_bdev_zone_block.a 00:02:36.629 SO libspdk_bdev_gpt.so.6.0 00:02:36.629 SYMLINK libspdk_bdev_ftl.so 00:02:36.629 SO libspdk_bdev_passthru.so.6.0 00:02:36.629 SO libspdk_bdev_delay.so.6.0 00:02:36.629 SO libspdk_bdev_malloc.so.6.0 00:02:36.629 SO libspdk_bdev_zone_block.so.6.0 00:02:36.629 SYMLINK libspdk_bdev_iscsi.so 00:02:36.629 SYMLINK libspdk_bdev_aio.so 00:02:36.629 SYMLINK libspdk_bdev_passthru.so 00:02:36.629 SYMLINK libspdk_bdev_gpt.so 00:02:36.629 LIB libspdk_bdev_lvol.a 00:02:36.629 SYMLINK libspdk_bdev_delay.so 00:02:36.629 SYMLINK libspdk_bdev_zone_block.so 00:02:36.629 SYMLINK libspdk_bdev_malloc.so 00:02:36.629 SO libspdk_bdev_lvol.so.6.0 00:02:36.629 LIB libspdk_bdev_virtio.a 00:02:36.629 SYMLINK libspdk_bdev_lvol.so 00:02:36.888 SO libspdk_bdev_virtio.so.6.0 00:02:36.888 SYMLINK libspdk_bdev_virtio.so 00:02:36.888 LIB libspdk_bdev_raid.a 00:02:37.147 SO libspdk_bdev_raid.so.6.0 00:02:37.147 SYMLINK libspdk_bdev_raid.so 00:02:37.714 LIB libspdk_bdev_nvme.a 00:02:37.974 SO libspdk_bdev_nvme.so.7.0 00:02:37.974 SYMLINK libspdk_bdev_nvme.so 00:02:38.540 CC module/event/subsystems/iobuf/iobuf.o 00:02:38.540 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:38.540 CC module/event/subsystems/scheduler/scheduler.o 00:02:38.540 CC module/event/subsystems/vmd/vmd.o 00:02:38.540 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:38.540 CC module/event/subsystems/keyring/keyring.o 00:02:38.540 CC module/event/subsystems/sock/sock.o 00:02:38.540 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:38.799 LIB libspdk_event_vmd.a 00:02:38.799 LIB libspdk_event_iobuf.a 00:02:38.799 LIB libspdk_event_keyring.a 00:02:38.799 LIB libspdk_event_scheduler.a 00:02:38.799 LIB libspdk_event_vhost_blk.a 00:02:38.799 LIB libspdk_event_sock.a 00:02:38.799 SO libspdk_event_iobuf.so.3.0 00:02:38.799 SO libspdk_event_vmd.so.6.0 00:02:38.799 SO libspdk_event_keyring.so.1.0 00:02:38.799 SO libspdk_event_scheduler.so.4.0 00:02:38.799 SO libspdk_event_vhost_blk.so.3.0 00:02:38.799 SO libspdk_event_sock.so.5.0 00:02:38.799 SYMLINK libspdk_event_iobuf.so 00:02:38.799 SYMLINK libspdk_event_vmd.so 00:02:38.799 SYMLINK libspdk_event_scheduler.so 00:02:38.799 SYMLINK libspdk_event_vhost_blk.so 00:02:38.799 SYMLINK libspdk_event_keyring.so 00:02:38.799 SYMLINK libspdk_event_sock.so 00:02:39.058 CC module/event/subsystems/accel/accel.o 00:02:39.317 LIB libspdk_event_accel.a 
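At this point the bdev modules (malloc, nvme, raid, lvol, passthru, virtio and friends) and the core event subsystems are being linked, which is most of what a standalone target application needs. A hedged smoke-test sketch for a build like this, using the standard SPDK binary and RPC names; the bdev size/name, listener address and NQN are placeholders, not values from this job, and hugepage setup (previous sketch) is assumed:

# Start the SPDK target built above and exercise it over JSON-RPC.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo ./build/bin/spdk_tgt &
sleep 2

# RAM-backed bdev: 64 MiB, 512-byte blocks.
sudo ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
sudo ./scripts/rpc.py bdev_get_bdevs

# Minimal NVMe-oF/TCP export of that bdev (this is an nvmf autotest job).
sudo ./scripts/rpc.py nvmf_create_transport -t TCP
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420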
00:02:39.317 SO libspdk_event_accel.so.6.0 00:02:39.317 SYMLINK libspdk_event_accel.so 00:02:39.575 CC module/event/subsystems/bdev/bdev.o 00:02:39.833 LIB libspdk_event_bdev.a 00:02:39.833 SO libspdk_event_bdev.so.6.0 00:02:39.833 SYMLINK libspdk_event_bdev.so 00:02:40.092 CC module/event/subsystems/nbd/nbd.o 00:02:40.092 CC module/event/subsystems/ublk/ublk.o 00:02:40.092 CC module/event/subsystems/scsi/scsi.o 00:02:40.092 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.092 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.351 LIB libspdk_event_nbd.a 00:02:40.351 SO libspdk_event_nbd.so.6.0 00:02:40.351 LIB libspdk_event_ublk.a 00:02:40.351 LIB libspdk_event_scsi.a 00:02:40.351 SO libspdk_event_ublk.so.3.0 00:02:40.351 SYMLINK libspdk_event_nbd.so 00:02:40.351 SO libspdk_event_scsi.so.6.0 00:02:40.351 LIB libspdk_event_nvmf.a 00:02:40.351 SYMLINK libspdk_event_ublk.so 00:02:40.351 SO libspdk_event_nvmf.so.6.0 00:02:40.351 SYMLINK libspdk_event_scsi.so 00:02:40.351 SYMLINK libspdk_event_nvmf.so 00:02:40.609 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:40.609 CC module/event/subsystems/iscsi/iscsi.o 00:02:40.868 LIB libspdk_event_vhost_scsi.a 00:02:40.868 SO libspdk_event_vhost_scsi.so.3.0 00:02:40.868 LIB libspdk_event_iscsi.a 00:02:40.868 SO libspdk_event_iscsi.so.6.0 00:02:40.868 SYMLINK libspdk_event_vhost_scsi.so 00:02:40.868 SYMLINK libspdk_event_iscsi.so 00:02:41.127 SO libspdk.so.6.0 00:02:41.127 SYMLINK libspdk.so 00:02:41.385 CC app/spdk_top/spdk_top.o 00:02:41.385 CC test/rpc_client/rpc_client_test.o 00:02:41.385 TEST_HEADER include/spdk/accel.h 00:02:41.385 CXX app/trace/trace.o 00:02:41.385 TEST_HEADER include/spdk/assert.h 00:02:41.385 TEST_HEADER include/spdk/base64.h 00:02:41.385 TEST_HEADER include/spdk/accel_module.h 00:02:41.385 TEST_HEADER include/spdk/bdev.h 00:02:41.385 TEST_HEADER include/spdk/barrier.h 00:02:41.385 TEST_HEADER include/spdk/bdev_module.h 00:02:41.385 CC app/spdk_lspci/spdk_lspci.o 00:02:41.385 TEST_HEADER include/spdk/bit_pool.h 00:02:41.385 TEST_HEADER include/spdk/bit_array.h 00:02:41.385 TEST_HEADER include/spdk/bdev_zone.h 00:02:41.385 TEST_HEADER include/spdk/blob_bdev.h 00:02:41.385 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:41.385 CC app/spdk_nvme_identify/identify.o 00:02:41.385 CC app/trace_record/trace_record.o 00:02:41.385 TEST_HEADER include/spdk/blobfs.h 00:02:41.385 TEST_HEADER include/spdk/conf.h 00:02:41.385 TEST_HEADER include/spdk/blob.h 00:02:41.385 TEST_HEADER include/spdk/config.h 00:02:41.385 TEST_HEADER include/spdk/crc16.h 00:02:41.385 TEST_HEADER include/spdk/cpuset.h 00:02:41.385 CC app/spdk_nvme_perf/perf.o 00:02:41.385 TEST_HEADER include/spdk/crc32.h 00:02:41.385 CC app/spdk_nvme_discover/discovery_aer.o 00:02:41.385 TEST_HEADER include/spdk/dif.h 00:02:41.385 TEST_HEADER include/spdk/crc64.h 00:02:41.385 TEST_HEADER include/spdk/dma.h 00:02:41.385 TEST_HEADER include/spdk/endian.h 00:02:41.385 TEST_HEADER include/spdk/env.h 00:02:41.385 TEST_HEADER include/spdk/event.h 00:02:41.385 TEST_HEADER include/spdk/env_dpdk.h 00:02:41.385 TEST_HEADER include/spdk/fd.h 00:02:41.385 TEST_HEADER include/spdk/fd_group.h 00:02:41.385 TEST_HEADER include/spdk/file.h 00:02:41.385 TEST_HEADER include/spdk/ftl.h 00:02:41.385 TEST_HEADER include/spdk/gpt_spec.h 00:02:41.385 TEST_HEADER include/spdk/hexlify.h 00:02:41.385 TEST_HEADER include/spdk/histogram_data.h 00:02:41.385 TEST_HEADER include/spdk/idxd.h 00:02:41.385 TEST_HEADER include/spdk/idxd_spec.h 00:02:41.385 TEST_HEADER include/spdk/init.h 00:02:41.385 
TEST_HEADER include/spdk/ioat.h 00:02:41.385 TEST_HEADER include/spdk/ioat_spec.h 00:02:41.385 TEST_HEADER include/spdk/iscsi_spec.h 00:02:41.385 TEST_HEADER include/spdk/jsonrpc.h 00:02:41.385 TEST_HEADER include/spdk/keyring.h 00:02:41.385 TEST_HEADER include/spdk/json.h 00:02:41.385 TEST_HEADER include/spdk/keyring_module.h 00:02:41.385 TEST_HEADER include/spdk/likely.h 00:02:41.385 TEST_HEADER include/spdk/lvol.h 00:02:41.385 TEST_HEADER include/spdk/memory.h 00:02:41.385 TEST_HEADER include/spdk/log.h 00:02:41.385 TEST_HEADER include/spdk/mmio.h 00:02:41.385 TEST_HEADER include/spdk/net.h 00:02:41.385 TEST_HEADER include/spdk/nbd.h 00:02:41.385 TEST_HEADER include/spdk/notify.h 00:02:41.385 TEST_HEADER include/spdk/nvme.h 00:02:41.385 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:41.385 CC app/nvmf_tgt/nvmf_main.o 00:02:41.385 TEST_HEADER include/spdk/nvme_intel.h 00:02:41.385 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:41.385 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:41.385 TEST_HEADER include/spdk/nvme_zns.h 00:02:41.385 TEST_HEADER include/spdk/nvme_spec.h 00:02:41.385 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:41.385 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:41.385 TEST_HEADER include/spdk/nvmf.h 00:02:41.385 TEST_HEADER include/spdk/nvmf_transport.h 00:02:41.385 TEST_HEADER include/spdk/opal_spec.h 00:02:41.385 TEST_HEADER include/spdk/opal.h 00:02:41.385 TEST_HEADER include/spdk/nvmf_spec.h 00:02:41.385 CC app/spdk_dd/spdk_dd.o 00:02:41.385 TEST_HEADER include/spdk/pci_ids.h 00:02:41.385 TEST_HEADER include/spdk/pipe.h 00:02:41.385 TEST_HEADER include/spdk/queue.h 00:02:41.385 TEST_HEADER include/spdk/reduce.h 00:02:41.385 TEST_HEADER include/spdk/rpc.h 00:02:41.385 TEST_HEADER include/spdk/scheduler.h 00:02:41.385 TEST_HEADER include/spdk/scsi.h 00:02:41.385 TEST_HEADER include/spdk/scsi_spec.h 00:02:41.385 TEST_HEADER include/spdk/sock.h 00:02:41.385 TEST_HEADER include/spdk/string.h 00:02:41.385 TEST_HEADER include/spdk/stdinc.h 00:02:41.385 TEST_HEADER include/spdk/thread.h 00:02:41.656 TEST_HEADER include/spdk/trace.h 00:02:41.656 TEST_HEADER include/spdk/trace_parser.h 00:02:41.656 TEST_HEADER include/spdk/tree.h 00:02:41.656 TEST_HEADER include/spdk/ublk.h 00:02:41.656 TEST_HEADER include/spdk/util.h 00:02:41.656 TEST_HEADER include/spdk/uuid.h 00:02:41.656 TEST_HEADER include/spdk/version.h 00:02:41.656 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:41.656 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:41.656 TEST_HEADER include/spdk/vhost.h 00:02:41.656 TEST_HEADER include/spdk/vmd.h 00:02:41.656 TEST_HEADER include/spdk/zipf.h 00:02:41.656 TEST_HEADER include/spdk/xor.h 00:02:41.656 CXX test/cpp_headers/accel.o 00:02:41.656 CXX test/cpp_headers/accel_module.o 00:02:41.656 CXX test/cpp_headers/assert.o 00:02:41.656 CXX test/cpp_headers/barrier.o 00:02:41.656 CXX test/cpp_headers/base64.o 00:02:41.656 CXX test/cpp_headers/bdev.o 00:02:41.656 CXX test/cpp_headers/bdev_zone.o 00:02:41.656 CXX test/cpp_headers/bdev_module.o 00:02:41.656 CXX test/cpp_headers/bit_array.o 00:02:41.656 CXX test/cpp_headers/bit_pool.o 00:02:41.656 CC app/iscsi_tgt/iscsi_tgt.o 00:02:41.656 CXX test/cpp_headers/blob_bdev.o 00:02:41.656 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.656 CXX test/cpp_headers/blob.o 00:02:41.656 CXX test/cpp_headers/conf.o 00:02:41.656 CXX test/cpp_headers/config.o 00:02:41.656 CXX test/cpp_headers/blobfs.o 00:02:41.656 CXX test/cpp_headers/cpuset.o 00:02:41.656 CXX test/cpp_headers/crc16.o 00:02:41.656 CXX test/cpp_headers/crc32.o 00:02:41.656 CXX 
test/cpp_headers/dif.o 00:02:41.656 CXX test/cpp_headers/crc64.o 00:02:41.656 CXX test/cpp_headers/env_dpdk.o 00:02:41.656 CXX test/cpp_headers/dma.o 00:02:41.656 CXX test/cpp_headers/env.o 00:02:41.656 CXX test/cpp_headers/endian.o 00:02:41.656 CXX test/cpp_headers/event.o 00:02:41.656 CXX test/cpp_headers/fd_group.o 00:02:41.656 CXX test/cpp_headers/fd.o 00:02:41.656 CXX test/cpp_headers/file.o 00:02:41.656 CXX test/cpp_headers/ftl.o 00:02:41.656 CXX test/cpp_headers/gpt_spec.o 00:02:41.656 CXX test/cpp_headers/hexlify.o 00:02:41.656 CXX test/cpp_headers/idxd.o 00:02:41.656 CXX test/cpp_headers/init.o 00:02:41.656 CXX test/cpp_headers/ioat.o 00:02:41.656 CXX test/cpp_headers/histogram_data.o 00:02:41.656 CXX test/cpp_headers/idxd_spec.o 00:02:41.656 CXX test/cpp_headers/json.o 00:02:41.656 CC app/spdk_tgt/spdk_tgt.o 00:02:41.656 CXX test/cpp_headers/ioat_spec.o 00:02:41.656 CXX test/cpp_headers/jsonrpc.o 00:02:41.656 CXX test/cpp_headers/iscsi_spec.o 00:02:41.656 CXX test/cpp_headers/likely.o 00:02:41.656 CXX test/cpp_headers/keyring.o 00:02:41.656 CXX test/cpp_headers/keyring_module.o 00:02:41.656 CXX test/cpp_headers/log.o 00:02:41.656 CXX test/cpp_headers/lvol.o 00:02:41.656 CXX test/cpp_headers/nbd.o 00:02:41.656 CXX test/cpp_headers/memory.o 00:02:41.656 CXX test/cpp_headers/mmio.o 00:02:41.656 CXX test/cpp_headers/net.o 00:02:41.656 CXX test/cpp_headers/notify.o 00:02:41.656 CXX test/cpp_headers/nvme.o 00:02:41.656 CXX test/cpp_headers/nvme_ocssd.o 00:02:41.656 CXX test/cpp_headers/nvme_intel.o 00:02:41.656 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:41.656 CXX test/cpp_headers/nvme_zns.o 00:02:41.656 CXX test/cpp_headers/nvme_spec.o 00:02:41.656 CXX test/cpp_headers/nvmf_cmd.o 00:02:41.656 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:41.656 CXX test/cpp_headers/nvmf.o 00:02:41.656 CXX test/cpp_headers/nvmf_spec.o 00:02:41.656 CXX test/cpp_headers/nvmf_transport.o 00:02:41.656 CXX test/cpp_headers/opal.o 00:02:41.656 CXX test/cpp_headers/opal_spec.o 00:02:41.656 CXX test/cpp_headers/pci_ids.o 00:02:41.656 CXX test/cpp_headers/pipe.o 00:02:41.656 CXX test/cpp_headers/queue.o 00:02:41.656 CC test/env/vtophys/vtophys.o 00:02:41.656 CC test/app/histogram_perf/histogram_perf.o 00:02:41.656 CC test/env/memory/memory_ut.o 00:02:41.656 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:41.656 CC test/thread/poller_perf/poller_perf.o 00:02:41.656 CC test/app/jsoncat/jsoncat.o 00:02:41.656 CC app/fio/nvme/fio_plugin.o 00:02:41.656 CC test/app/stub/stub.o 00:02:41.656 CC test/env/pci/pci_ut.o 00:02:41.656 CC examples/util/zipf/zipf.o 00:02:41.656 CC app/fio/bdev/fio_plugin.o 00:02:41.656 CC examples/ioat/verify/verify.o 00:02:41.656 CXX test/cpp_headers/reduce.o 00:02:41.656 CC test/dma/test_dma/test_dma.o 00:02:41.656 CC test/app/bdev_svc/bdev_svc.o 00:02:41.656 CC examples/ioat/perf/perf.o 00:02:41.924 LINK rpc_client_test 00:02:41.924 LINK spdk_lspci 00:02:41.924 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:42.182 CC test/env/mem_callbacks/mem_callbacks.o 00:02:42.182 LINK nvmf_tgt 00:02:42.182 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.182 LINK spdk_nvme_discover 00:02:42.182 LINK poller_perf 00:02:42.182 LINK spdk_trace_record 00:02:42.182 LINK interrupt_tgt 00:02:42.182 LINK env_dpdk_post_init 00:02:42.182 CXX test/cpp_headers/rpc.o 00:02:42.182 CXX test/cpp_headers/scheduler.o 00:02:42.182 CXX test/cpp_headers/scsi.o 00:02:42.182 CXX test/cpp_headers/scsi_spec.o 00:02:42.182 CXX test/cpp_headers/sock.o 00:02:42.182 CXX test/cpp_headers/string.o 00:02:42.182 CXX 
test/cpp_headers/stdinc.o 00:02:42.182 CXX test/cpp_headers/thread.o 00:02:42.182 CXX test/cpp_headers/trace.o 00:02:42.182 LINK iscsi_tgt 00:02:42.182 CXX test/cpp_headers/trace_parser.o 00:02:42.182 CXX test/cpp_headers/ublk.o 00:02:42.182 CXX test/cpp_headers/tree.o 00:02:42.182 CXX test/cpp_headers/util.o 00:02:42.182 CXX test/cpp_headers/uuid.o 00:02:42.182 CXX test/cpp_headers/version.o 00:02:42.182 CXX test/cpp_headers/vfio_user_pci.o 00:02:42.182 CXX test/cpp_headers/vfio_user_spec.o 00:02:42.182 LINK vtophys 00:02:42.182 CXX test/cpp_headers/vhost.o 00:02:42.182 CXX test/cpp_headers/vmd.o 00:02:42.182 CXX test/cpp_headers/xor.o 00:02:42.182 LINK histogram_perf 00:02:42.182 CXX test/cpp_headers/zipf.o 00:02:42.182 LINK jsoncat 00:02:42.182 LINK zipf 00:02:42.182 LINK stub 00:02:42.182 LINK spdk_tgt 00:02:42.182 LINK verify 00:02:42.439 LINK bdev_svc 00:02:42.439 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.439 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:42.439 LINK ioat_perf 00:02:42.439 LINK mem_callbacks 00:02:42.439 LINK spdk_trace 00:02:42.439 LINK spdk_dd 00:02:42.439 LINK pci_ut 00:02:42.439 LINK test_dma 00:02:42.698 LINK nvme_fuzz 00:02:42.698 LINK spdk_bdev 00:02:42.698 LINK spdk_nvme_perf 00:02:42.698 LINK spdk_nvme 00:02:42.698 CC test/event/reactor/reactor.o 00:02:42.698 CC test/event/event_perf/event_perf.o 00:02:42.698 CC test/event/reactor_perf/reactor_perf.o 00:02:42.698 CC test/event/app_repeat/app_repeat.o 00:02:42.698 LINK spdk_top 00:02:42.698 CC examples/vmd/led/led.o 00:02:42.698 CC examples/vmd/lsvmd/lsvmd.o 00:02:42.698 LINK vhost_fuzz 00:02:42.698 CC test/event/scheduler/scheduler.o 00:02:42.698 CC examples/idxd/perf/perf.o 00:02:42.698 CC examples/sock/hello_world/hello_sock.o 00:02:42.698 CC examples/thread/thread/thread_ex.o 00:02:42.698 CC app/vhost/vhost.o 00:02:42.698 LINK memory_ut 00:02:42.698 LINK spdk_nvme_identify 00:02:42.958 LINK event_perf 00:02:42.958 LINK reactor 00:02:42.958 LINK reactor_perf 00:02:42.958 LINK led 00:02:42.958 LINK lsvmd 00:02:42.958 LINK app_repeat 00:02:42.958 LINK scheduler 00:02:42.958 LINK vhost 00:02:42.958 LINK hello_sock 00:02:42.958 CC test/nvme/boot_partition/boot_partition.o 00:02:42.958 CC test/nvme/aer/aer.o 00:02:42.958 CC test/nvme/cuse/cuse.o 00:02:42.958 CC test/nvme/compliance/nvme_compliance.o 00:02:42.958 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:42.958 CC test/nvme/simple_copy/simple_copy.o 00:02:42.958 LINK thread 00:02:42.958 CC test/nvme/startup/startup.o 00:02:42.958 CC test/nvme/overhead/overhead.o 00:02:42.958 CC test/nvme/sgl/sgl.o 00:02:42.958 CC test/nvme/reset/reset.o 00:02:42.958 CC test/nvme/connect_stress/connect_stress.o 00:02:42.958 CC test/nvme/reserve/reserve.o 00:02:42.958 CC test/nvme/fdp/fdp.o 00:02:42.958 CC test/nvme/fused_ordering/fused_ordering.o 00:02:42.958 CC test/nvme/e2edp/nvme_dp.o 00:02:42.958 CC test/nvme/err_injection/err_injection.o 00:02:42.958 LINK idxd_perf 00:02:42.958 CC test/blobfs/mkfs/mkfs.o 00:02:42.958 CC test/accel/dif/dif.o 00:02:43.216 CC test/lvol/esnap/esnap.o 00:02:43.216 LINK boot_partition 00:02:43.216 LINK connect_stress 00:02:43.216 LINK startup 00:02:43.216 LINK doorbell_aers 00:02:43.216 LINK fused_ordering 00:02:43.216 LINK err_injection 00:02:43.216 LINK reserve 00:02:43.216 LINK mkfs 00:02:43.216 LINK simple_copy 00:02:43.216 LINK aer 00:02:43.216 LINK sgl 00:02:43.216 LINK reset 00:02:43.216 LINK overhead 00:02:43.216 LINK nvme_dp 00:02:43.216 LINK fdp 00:02:43.216 LINK nvme_compliance 00:02:43.473 CC 
examples/nvme/hotplug/hotplug.o 00:02:43.473 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:43.473 CC examples/nvme/reconnect/reconnect.o 00:02:43.473 CC examples/nvme/arbitration/arbitration.o 00:02:43.473 CC examples/nvme/hello_world/hello_world.o 00:02:43.473 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:43.473 CC examples/nvme/abort/abort.o 00:02:43.473 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.473 LINK dif 00:02:43.473 CC examples/accel/perf/accel_perf.o 00:02:43.473 CC examples/blob/cli/blobcli.o 00:02:43.473 CC examples/blob/hello_world/hello_blob.o 00:02:43.473 LINK pmr_persistence 00:02:43.473 LINK hotplug 00:02:43.731 LINK cmb_copy 00:02:43.731 LINK hello_world 00:02:43.731 LINK iscsi_fuzz 00:02:43.731 LINK reconnect 00:02:43.731 LINK arbitration 00:02:43.731 LINK abort 00:02:43.731 LINK hello_blob 00:02:43.731 LINK nvme_manage 00:02:43.988 LINK accel_perf 00:02:43.988 LINK blobcli 00:02:43.988 CC test/bdev/bdevio/bdevio.o 00:02:43.988 LINK cuse 00:02:44.245 LINK bdevio 00:02:44.245 CC examples/bdev/hello_world/hello_bdev.o 00:02:44.245 CC examples/bdev/bdevperf/bdevperf.o 00:02:44.502 LINK hello_bdev 00:02:44.760 LINK bdevperf 00:02:45.326 CC examples/nvmf/nvmf/nvmf.o 00:02:45.584 LINK nvmf 00:02:46.517 LINK esnap 00:02:46.774 00:02:46.774 real 0m33.356s 00:02:46.774 user 5m8.068s 00:02:46.774 sys 2m21.935s 00:02:46.774 10:24:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.774 10:24:54 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.774 ************************************ 00:02:46.774 END TEST make 00:02:46.774 ************************************ 00:02:46.774 10:24:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.774 10:24:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.774 10:24:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.774 10:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.774 10:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.774 10:24:54 -- pm/common@44 -- $ pid=1917427 00:02:46.774 10:24:54 -- pm/common@50 -- $ kill -TERM 1917427 00:02:46.774 10:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.774 10:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.774 10:24:54 -- pm/common@44 -- $ pid=1917428 00:02:46.774 10:24:54 -- pm/common@50 -- $ kill -TERM 1917428 00:02:46.774 10:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.775 10:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.775 10:24:54 -- pm/common@44 -- $ pid=1917430 00:02:46.775 10:24:54 -- pm/common@50 -- $ kill -TERM 1917430 00:02:46.775 10:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.775 10:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.775 10:24:54 -- pm/common@44 -- $ pid=1917454 00:02:46.775 10:24:54 -- pm/common@50 -- $ sudo -E kill -TERM 1917454 00:02:47.033 10:24:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:47.033 10:24:54 -- nvmf/common.sh@7 -- # uname -s 00:02:47.033 10:24:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.033 10:24:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.033 10:24:54 -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.033 10:24:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.033 10:24:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.033 10:24:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.033 10:24:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.033 10:24:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.033 10:24:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.033 10:24:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.033 10:24:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:02:47.033 10:24:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:02:47.033 10:24:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.033 10:24:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.033 10:24:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:47.033 10:24:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.033 10:24:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:47.033 10:24:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.033 10:24:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.033 10:24:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.033 10:24:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.033 10:24:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.033 10:24:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.033 10:24:54 -- paths/export.sh@5 -- # export PATH 00:02:47.033 10:24:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.033 10:24:54 -- nvmf/common.sh@47 -- # : 0 00:02:47.033 10:24:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:47.033 10:24:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:47.033 10:24:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.033 10:24:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.033 10:24:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.033 10:24:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:47.033 10:24:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:47.033 10:24:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:47.033 10:24:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.033 10:24:54 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:47.033 10:24:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.033 10:24:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.033 10:24:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:47.033 10:24:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.033 10:24:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:47.033 10:24:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.033 10:24:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.033 10:24:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.033 10:24:54 -- spdk/autotest.sh@48 -- # udevadm_pid=1989987 00:02:47.033 10:24:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.033 10:24:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.033 10:24:54 -- pm/common@17 -- # local monitor 00:02:47.033 10:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.033 10:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.033 10:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.033 10:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.033 10:24:54 -- pm/common@21 -- # date +%s 00:02:47.033 10:24:54 -- pm/common@25 -- # sleep 1 00:02:47.033 10:24:54 -- pm/common@21 -- # date +%s 00:02:47.033 10:24:54 -- pm/common@21 -- # date +%s 00:02:47.033 10:24:54 -- pm/common@21 -- # date +%s 00:02:47.033 10:24:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721809494 00:02:47.033 10:24:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721809494 00:02:47.033 10:24:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721809494 00:02:47.033 10:24:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721809494 00:02:47.033 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721809494_collect-vmstat.pm.log 00:02:47.033 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721809494_collect-cpu-load.pm.log 00:02:47.033 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721809494_collect-cpu-temp.pm.log 00:02:47.033 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721809494_collect-bmc-pm.bmc.pm.log 00:02:47.965 10:24:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:47.965 10:24:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:47.965 10:24:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:47.965 10:24:55 -- common/autotest_common.sh@10 -- # set +x 00:02:47.965 10:24:55 -- spdk/autotest.sh@59 -- # create_test_list 
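
The four collect-* helpers launched above each sample one resource (CPU load, vmstat, CPU temperature, BMC power) into a pm.log under ../output/power until autotest sends them TERM. The real scripts live in scripts/perf/pm/ and are not reproduced here; the following is only a minimal stand-in showing the general shape of such a sampler:

#!/usr/bin/env bash
# Minimal illustration of a periodic resource sampler, NOT the SPDK script.
# Usage: ./sample-load.sh /path/to/output 1   (output directory, interval in seconds)
set -euo pipefail

outdir=${1:-.}
interval=${2:-1}
logfile="$outdir/monitor.collect-cpu-load.pm.log"

trap 'exit 0' TERM INT            # autotest stops its monitors with SIGTERM

while true; do
    # timestamp plus the 1/5/15 minute load averages from the kernel
    printf '%s %s\n' "$(date +%s)" "$(cut -d' ' -f1-3 /proc/loadavg)" >> "$logfile"
    sleep "$interval"
done
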
00:02:47.965 10:24:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:47.965 10:24:55 -- common/autotest_common.sh@10 -- # set +x 00:02:47.965 10:24:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:47.965 10:24:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:47.965 10:24:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:47.965 10:24:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:47.965 10:24:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:47.965 10:24:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:47.965 10:24:55 -- common/autotest_common.sh@1455 -- # uname 00:02:47.965 10:24:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:47.965 10:24:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:47.965 10:24:55 -- common/autotest_common.sh@1475 -- # uname 00:02:47.965 10:24:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:47.965 10:24:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:47.965 10:24:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:47.965 10:24:55 -- spdk/autotest.sh@72 -- # hash lcov 00:02:47.966 10:24:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:47.966 10:24:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:47.966 --rc lcov_branch_coverage=1 00:02:47.966 --rc lcov_function_coverage=1 00:02:47.966 --rc genhtml_branch_coverage=1 00:02:47.966 --rc genhtml_function_coverage=1 00:02:47.966 --rc genhtml_legend=1 00:02:47.966 --rc geninfo_all_blocks=1 00:02:47.966 ' 00:02:47.966 10:24:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:47.966 --rc lcov_branch_coverage=1 00:02:47.966 --rc lcov_function_coverage=1 00:02:47.966 --rc genhtml_branch_coverage=1 00:02:47.966 --rc genhtml_function_coverage=1 00:02:47.966 --rc genhtml_legend=1 00:02:47.966 --rc geninfo_all_blocks=1 00:02:47.966 ' 00:02:47.966 10:24:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:47.966 --rc lcov_branch_coverage=1 00:02:47.966 --rc lcov_function_coverage=1 00:02:47.966 --rc genhtml_branch_coverage=1 00:02:47.966 --rc genhtml_function_coverage=1 00:02:47.966 --rc genhtml_legend=1 00:02:47.966 --rc geninfo_all_blocks=1 00:02:47.966 --no-external' 00:02:47.966 10:24:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:47.966 --rc lcov_branch_coverage=1 00:02:47.966 --rc lcov_function_coverage=1 00:02:47.966 --rc genhtml_branch_coverage=1 00:02:47.966 --rc genhtml_function_coverage=1 00:02:47.966 --rc genhtml_legend=1 00:02:47.966 --rc geninfo_all_blocks=1 00:02:47.966 --no-external' 00:02:47.966 10:24:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:48.223 lcov: LCOV version 1.14 00:02:48.223 10:24:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:00.485 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:00.485 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:08.588 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:08.589 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no 
functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:08.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:08.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:08.590 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:08.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:11.117 10:25:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:11.117 10:25:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:11.117 10:25:18 -- common/autotest_common.sh@10 -- # set +x 00:03:11.117 10:25:18 -- spdk/autotest.sh@91 -- # rm -f 00:03:11.117 10:25:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.644 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:03:13.644 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:13.644 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:13.900 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:13.900 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:13.900 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:13.900 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:13.900 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:13.900 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:13.900 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:13.900 10:25:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:13.900 10:25:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:13.900 10:25:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:13.900 10:25:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:13.901 10:25:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:13.901 10:25:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:13.901 10:25:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:13.901 10:25:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.901 10:25:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:13.901 10:25:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:13.901 10:25:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:13.901 10:25:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:13.901 10:25:21 
-- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:13.901 10:25:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:13.901 10:25:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:13.901 No valid GPT data, bailing 00:03:13.901 10:25:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.158 10:25:21 -- scripts/common.sh@391 -- # pt= 00:03:14.158 10:25:21 -- scripts/common.sh@392 -- # return 1 00:03:14.158 10:25:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.158 1+0 records in 00:03:14.158 1+0 records out 00:03:14.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524557 s, 200 MB/s 00:03:14.158 10:25:21 -- spdk/autotest.sh@118 -- # sync 00:03:14.158 10:25:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.158 10:25:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.158 10:25:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:19.425 10:25:26 -- spdk/autotest.sh@124 -- # uname -s 00:03:19.425 10:25:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:19.425 10:25:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:19.425 10:25:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:19.425 10:25:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:19.425 10:25:26 -- common/autotest_common.sh@10 -- # set +x 00:03:19.425 ************************************ 00:03:19.425 START TEST setup.sh 00:03:19.425 ************************************ 00:03:19.425 10:25:26 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:19.425 * Looking for test storage... 00:03:19.425 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:19.425 10:25:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:19.425 10:25:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:19.425 10:25:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:19.425 10:25:26 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:19.425 10:25:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:19.425 10:25:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:19.425 ************************************ 00:03:19.425 START TEST acl 00:03:19.425 ************************************ 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:19.425 * Looking for test storage... 
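
Before the setup tests start, autotest pre-cleans the NVMe namespace it found: it skips zoned devices, asks blkid (and scripts/spdk-gpt.py) whether a partition table is present, and only then zeroes the first MiB. A condensed, hedged re-creation of that sequence — the device name is just the example from this log, and the dd step is destructive, so only point it at a disposable test disk:

#!/usr/bin/env bash
# Illustration of the pre-cleanup seen in the log; destructive on $dev.
set -euo pipefail

dev=/dev/nvme0n1                           # example device from the log

# Skip host-managed/host-aware zoned namespaces (same sysfs attribute autotest checks).
zoned=$(cat "/sys/block/$(basename "$dev")/queue/zoned")
[[ "$zoned" != "none" ]] && { echo "$dev is zoned, skipping"; exit 0; }

# Is there a recognizable partition table? blkid prints e.g. "gpt", or nothing at all.
pt=$(blkid -s PTTYPE -o value "$dev" || true)
if [[ -z "$pt" ]]; then
    # No valid GPT data -> wipe the first MiB so the tests start from a blank device.
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync
fi
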
00:03:19.425 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:19.425 10:25:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:19.425 10:25:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:19.425 10:25:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:19.425 10:25:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:19.425 10:25:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:19.425 10:25:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:19.425 10:25:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:19.425 10:25:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.425 10:25:26 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.951 10:25:28 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:21.951 10:25:28 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:21.951 10:25:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.951 10:25:28 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:21.951 10:25:28 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.951 10:25:28 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:24.482 Hugepages 00:03:24.482 node hugesize free / total 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 00:03:24.482 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.1 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:24.482 10:25:31 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 
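The long run of [[ <bdf> == *:*:*.* ]] / [[ ioatdma == nvme ]] entries above and below is acl.sh walking the "Type BDF Vendor Device NUMA Driver Device Block devices" table printed by setup.sh status and keeping only PCI functions bound to the nvme driver. A compact sketch of that collection pattern, with illustrative variable names and a relative script path (the harness itself uses absolute paths and an additional PCI_BLOCKED filter):

    declare -a devs
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue    # skip the hugepage summary and header rows
        [[ $driver == nvme ]] || continue    # ioatdma channels are not test targets here
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(./scripts/setup.sh status)
    printf 'collected nvme device: %s\n' "${devs[@]}"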
00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:24.483 10:25:31 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:24.483 10:25:31 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.483 10:25:31 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.483 10:25:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.483 ************************************ 00:03:24.483 START TEST denied 00:03:24.483 ************************************ 00:03:24.483 10:25:31 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:24.483 10:25:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:03:24.483 10:25:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:24.483 10:25:31 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:03:24.483 10:25:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.483 10:25:31 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:27.014 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:03:27.014 10:25:34 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.014 10:25:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.198 00:03:31.198 real 0m6.424s 00:03:31.198 user 0m2.033s 00:03:31.198 sys 0m3.653s 00:03:31.198 10:25:38 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.198 10:25:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:31.198 ************************************ 00:03:31.198 END TEST denied 00:03:31.198 ************************************ 00:03:31.198 10:25:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.198 10:25:38 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.198 10:25:38 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.198 10:25:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.198 ************************************ 00:03:31.198 START TEST allowed 00:03:31.198 ************************************ 00:03:31.198 10:25:38 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:31.198 10:25:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:03:31.198 10:25:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.199 10:25:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:03:31.199 10:25:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.199 10:25:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:35.434 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:35.434 10:25:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:35.434 10:25:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:35.434 10:25:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:35.434 10:25:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.434 10:25:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.719 00:03:38.719 real 0m7.207s 00:03:38.719 user 0m1.986s 00:03:38.719 sys 0m3.712s 00:03:38.719 10:25:45 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.719 10:25:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:38.719 ************************************ 00:03:38.719 END TEST allowed 00:03:38.719 ************************************ 00:03:38.719 00:03:38.719 real 0m19.222s 00:03:38.719 user 0m5.966s 00:03:38.719 sys 0m11.140s 00:03:38.719 10:25:45 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.719 10:25:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:38.719 ************************************ 00:03:38.719 END TEST acl 00:03:38.719 ************************************ 00:03:38.719 10:25:45 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:38.719 10:25:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.719 10:25:45 setup.sh -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:03:38.719 10:25:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.719 ************************************ 00:03:38.719 START TEST hugepages 00:03:38.719 ************************************ 00:03:38.719 10:25:45 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:38.719 * Looking for test storage... 00:03:38.719 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 166732584 kB' 'MemAvailable: 170048652 kB' 'Buffers: 4132 kB' 'Cached: 16112012 kB' 'SwapCached: 0 kB' 'Active: 12938116 kB' 'Inactive: 3710384 kB' 'Active(anon): 12459108 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535712 kB' 'Mapped: 179800 kB' 'Shmem: 11926752 kB' 'KReclaimable: 542256 kB' 'Slab: 1189236 kB' 'SReclaimable: 542256 kB' 'SUnreclaim: 646980 kB' 'KernelStack: 20624 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 13880932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316592 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.719 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
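The repetitive [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue entries through this stretch are the get_meminfo helper scanning /proc/meminfo one field at a time until it reaches Hugepagesize (it eventually echoes 2048, i.e. 2 MiB pages, further down). A compact equivalent of that lookup, offered as an illustrative sketch rather than the harness's own setup/common.sh code:

    get_meminfo() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"        # size fields are reported in kB
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this node, per the trace below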
00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.720 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.721 10:25:45 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.721 10:25:45 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:38.721 10:25:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:38.721 10:25:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.721 10:25:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.721 10:25:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.721 ************************************ 00:03:38.721 START TEST default_setup 00:03:38.721 ************************************ 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.721 10:25:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:41.254 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:00:04.0 (8086 
2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.254 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.633 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168865328 kB' 'MemAvailable: 172181316 kB' 'Buffers: 4132 kB' 'Cached: 16112128 kB' 'SwapCached: 0 kB' 'Active: 12955596 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476588 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552928 kB' 'Mapped: 179720 kB' 'Shmem: 11926868 kB' 'KReclaimable: 542096 kB' 'Slab: 1187236 kB' 'SReclaimable: 542096 kB' 'SUnreclaim: 645140 kB' 'KernelStack: 20784 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13900368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316608 kB' 
'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168866624 kB' 'MemAvailable: 172182612 kB' 'Buffers: 4132 kB' 'Cached: 16112132 kB' 'SwapCached: 0 kB' 'Active: 12955692 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476684 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553008 kB' 'Mapped: 179720 kB' 'Shmem: 11926872 kB' 'KReclaimable: 542096 kB' 'Slab: 1187204 kB' 'SReclaimable: 542096 kB' 'SUnreclaim: 645108 kB' 'KernelStack: 20896 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13898896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316640 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
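Note on the trace above: the long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" are bash xtrace output from the get_meminfo helper in setup/common.sh. The helper snapshots /proc/meminfo (or a per-node meminfo file when a node id is passed), strips any "Node N " prefix, then walks the snapshot line by line with IFS=': ' until the requested key matches, echoes its value and returns; in this run the AnonHugePages lookup ended in "echo 0" / "return 0" and was stored as anon=0, and the HugePages_Surp lookup traced here ends the same way. A minimal sketch of that loop, reconstructed from the trace (the exact function body, option handling and extglob usage are inferred, not copied from the repository):

    shopt -s extglob                         # needed for the "Node N " prefix strip below

    get_meminfo() {                          # e.g. get_meminfo HugePages_Surp [node]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _

        # use the per-node meminfo when a node id is given and the sysfs file exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node lines carry a "Node N " prefix

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # the repeated "continue" entries in the trace
            echo "${val:-0}"                 # kB for most fields, a page count for HugePages_*
            return 0
        done
        return 1
    }
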
00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168867464 kB' 'MemAvailable: 172183452 kB' 'Buffers: 4132 kB' 'Cached: 16112148 kB' 'SwapCached: 0 kB' 'Active: 12955496 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476488 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552824 kB' 'Mapped: 179712 kB' 'Shmem: 11926888 kB' 'KReclaimable: 542096 kB' 'Slab: 1187264 kB' 'SReclaimable: 542096 kB' 'SUnreclaim: 645168 kB' 'KernelStack: 20768 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13900408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316720 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.636 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 
10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.638 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.639 nr_hugepages=1024 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.639 resv_hugepages=0 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.639 surplus_hugepages=0 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.639 anon_hugepages=0 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168869256 kB' 'MemAvailable: 172185244 kB' 'Buffers: 4132 kB' 'Cached: 16112168 kB' 'SwapCached: 0 kB' 'Active: 12955184 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476176 kB' 'Inactive(anon): 0 kB' 
'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552452 kB' 'Mapped: 179712 kB' 'Shmem: 11926908 kB' 'KReclaimable: 542096 kB' 'Slab: 1187232 kB' 'SReclaimable: 542096 kB' 'SUnreclaim: 645136 kB' 'KernelStack: 20672 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13900428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316592 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.639 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.640 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
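The scan traced above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time with IFS=': ', hitting `continue` for every key that is not HugePages_Total and finally echoing the value (1024 here). A minimal, self-contained sketch of that pattern, with read_meminfo_field as a hypothetical stand-in for the real helper:

#!/usr/bin/env bash
# Sketch only: read_meminfo_field is an assumed name, not the SPDK helper itself.
shopt -s extglob

read_meminfo_field() {
    local get=$1 node=$2                     # field name, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue     # the repeated 'continue' entries in the trace above
        echo "$val"
        return 0
    done
    return 1
}

read_meminfo_field HugePages_Total           # -> 1024 in this run
read_meminfo_field HugePages_Surp 0          # node 0 surplus -> 0

The per-node branch matches the trace that follows, where mem_f switches to /sys/devices/system/node/node0/meminfo before HugePages_Surp is read for node 0.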
00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85668308 kB' 'MemUsed: 11994376 kB' 'SwapCached: 0 kB' 'Active: 7891220 kB' 'Inactive: 252504 kB' 'Active(anon): 7694168 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 252504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7728532 kB' 'Mapped: 119080 kB' 'AnonPages: 418428 kB' 'Shmem: 7278976 kB' 'KernelStack: 13192 kB' 'PageTables: 5408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 366716 kB' 'Slab: 683100 kB' 'SReclaimable: 366716 kB' 'SUnreclaim: 316384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.643 10:25:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:42.643 node0=1024 expecting 1024 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.643 00:03:42.643 real 0m4.371s 00:03:42.643 user 0m1.157s 00:03:42.643 sys 0m1.839s 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.643 10:25:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:42.643 ************************************ 00:03:42.643 END TEST default_setup 00:03:42.643 ************************************ 00:03:42.901 10:25:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:42.901 10:25:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.901 10:25:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.901 10:25:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.901 ************************************ 00:03:42.901 START TEST per_node_1G_alloc 00:03:42.901 ************************************ 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.901 10:25:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:45.429 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:45.429 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.429 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:45.694 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168873036 kB' 'MemAvailable: 172188968 kB' 'Buffers: 4132 kB' 'Cached: 16112268 kB' 'SwapCached: 0 kB' 'Active: 12954740 kB' 'Inactive: 3710384 kB' 'Active(anon): 12475732 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551428 kB' 'Mapped: 178920 kB' 'Shmem: 11927008 kB' 'KReclaimable: 541984 kB' 'Slab: 1186980 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644996 kB' 'KernelStack: 20768 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13889164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316784 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.694 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
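At this point verify_nr_hugepages has recorded anon=0 (AnonHugePages from /proc/meminfo) and is about to fetch HugePages_Surp, which feeds the same accounting identity checked at setup/hugepages.sh@110 earlier: the kernel's HugePages_Total must equal the requested count plus surplus and reserved pages. A rough, self-contained restatement of that check using the values seen in this run (illustrative variable names, not the script's own):

# Assumed restatement of the check at setup/hugepages.sh@110, not SPDK code.
nr_hugepages=1024                                            # what the test requested
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting OK: $total total, $surp surplus, $resv reserved"
fi

The per-node variant repeats the lookup against /sys/devices/system/node/node<N>/meminfo, which is what produced the 'node0=1024 expecting 1024' line in the default_setup output above.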
00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.695 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168873976 kB' 'MemAvailable: 172189908 kB' 'Buffers: 4132 kB' 'Cached: 16112272 kB' 'SwapCached: 0 kB' 'Active: 12954532 kB' 'Inactive: 3710384 kB' 'Active(anon): 12475524 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551244 kB' 'Mapped: 178920 kB' 'Shmem: 11927012 kB' 'KReclaimable: 541984 kB' 'Slab: 1186936 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644952 kB' 'KernelStack: 20544 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13888052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316560 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.696 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.697 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168874064 kB' 'MemAvailable: 172189996 kB' 'Buffers: 4132 kB' 'Cached: 16112288 kB' 'SwapCached: 0 kB' 'Active: 12954120 kB' 'Inactive: 3710384 kB' 'Active(anon): 12475112 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551356 kB' 'Mapped: 178844 kB' 'Shmem: 11927028 kB' 'KReclaimable: 541984 kB' 'Slab: 1186864 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644880 kB' 'KernelStack: 20720 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13888072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316560 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.699 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.700 nr_hugepages=1024 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.700 resv_hugepages=0 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.700 surplus_hugepages=0 00:03:45.700 10:25:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.700 anon_hugepages=0 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.700 10:25:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168874756 kB' 'MemAvailable: 172190688 kB' 'Buffers: 4132 kB' 'Cached: 16112332 kB' 'SwapCached: 0 kB' 'Active: 12953844 kB' 'Inactive: 3710384 kB' 'Active(anon): 12474836 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551012 kB' 'Mapped: 178844 kB' 'Shmem: 11927072 kB' 'KReclaimable: 541984 kB' 'Slab: 1186864 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644880 kB' 'KernelStack: 20720 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13888096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316560 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.700 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.701 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86701032 kB' 'MemUsed: 10961652 kB' 'SwapCached: 0 kB' 'Active: 7891492 kB' 'Inactive: 252504 kB' 'Active(anon): 7694440 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 252504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7728612 kB' 'Mapped: 118604 kB' 'AnonPages: 418536 kB' 'Shmem: 7279056 kB' 'KernelStack: 13304 kB' 'PageTables: 5756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 366692 kB' 'Slab: 682988 kB' 'SReclaimable: 366692 kB' 'SUnreclaim: 316296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.702 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.703 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 82174876 kB' 'MemUsed: 11543600 kB' 'SwapCached: 0 kB' 'Active: 5062976 kB' 'Inactive: 3457880 kB' 'Active(anon): 4781020 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3457880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8387876 kB' 'Mapped: 60240 kB' 'AnonPages: 133092 kB' 'Shmem: 4648040 kB' 'KernelStack: 7400 kB' 'PageTables: 3048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175292 kB' 'Slab: 503876 kB' 'SReclaimable: 175292 kB' 'SUnreclaim: 328584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 
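The surrounding trace shows setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node1/meminfo: every line has its "Node 1 " prefix stripped, is split on ': ', and is compared against the requested key (HugePages_Surp here) until it matches, at which point the value is echoed and the helper returns 0. A minimal stand-alone sketch of the same parsing idea follows; the function name and the fallback return code are illustrative only, not the harness's actual code.

# Sketch only: fetch one key from a per-node meminfo file, assuming the
# "Node <N> <Key>: <value> [kB]" layout visible in the printf output above.
get_node_meminfo() {                          # usage: get_node_meminfo <Key> <node>
    local key=$1 node=$2 _node _idx var val unit
    while IFS=': ' read -r _node _idx var val unit; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1                                  # key not present in this node's meminfo
}
# e.g. get_node_meminfo HugePages_Surp 1   -> prints 0 for the node shown above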
10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:45.706 node0=512 expecting 512 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:45.706 node1=512 expecting 512 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:45.706 00:03:45.706 real 0m2.952s 00:03:45.706 user 0m1.268s 00:03:45.706 sys 0m1.749s 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.706 10:25:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.706 ************************************ 00:03:45.706 END TEST per_node_1G_alloc 00:03:45.706 ************************************ 00:03:45.706 10:25:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:45.706 10:25:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.706 10:25:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.706 10:25:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.965 ************************************ 00:03:45.965 
START TEST even_2G_alloc 00:03:45.965 ************************************ 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.965 10:25:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:48.494 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.494 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.494 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.494 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.494 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.495 
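The even_2G_alloc test traced above requests 2097152 kB (1024 pages of 2048 kB) with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, and get_test_nr_hugepages_per_node splits that budget evenly over the machine's two NUMA nodes, which is why nodes_test comes out as 512 and 512. The loop below is only an illustrative sketch of that even split, not the harness's implementation; the sysfs glob matches the node directories seen in the trace.

# Sketch: spread a hugepage budget evenly across the NUMA nodes, matching the
# 512/512 split derived above for NRHUGE=1024 on this 2-node machine.
nr_hugepages=1024
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( nr_hugepages / ${#nodes[@]} ))
for n in "${nodes[@]}"; do
    echo "node${n##*node}=$per_node"          # node0=512, node1=512 here
done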
0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.495 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168861088 kB' 'MemAvailable: 172177020 kB' 'Buffers: 4132 kB' 'Cached: 16112416 kB' 'SwapCached: 0 kB' 'Active: 12955644 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476636 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552280 kB' 'Mapped: 179320 kB' 'Shmem: 11927156 kB' 'KReclaimable: 541984 kB' 'Slab: 1186840 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644856 kB' 'KernelStack: 
20656 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13888440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316624 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.495 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
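The repeated [[ ... ]] / continue pairs above are the xtrace of the get_meminfo helper in setup/common.sh scanning every /proc/meminfo field until it reaches AnonHugePages. A minimal sketch of the flow those traced commands imply is given below; it is reconstructed only from what the trace shows (the meminfo file selection, the "Node N " prefix strip, and the IFS=': ' scan), so everything beyond those visible commands is an assumption rather than the SPDK source.

#!/usr/bin/env bash
# Sketch of the get_meminfo flow visible in the xtrace above (a reconstruction, not the SPDK helper).
shopt -s extglob                      # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local var val _ line
    # Per-node query: use the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so the scan works either way.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace are this branch
        echo "${val:-0}"
        return 0
    done
    echo 0
}

get_meminfo AnonHugePages    # system-wide value in kB; 0 on this machine, hence anon=0 in the trace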
00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.496 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.497 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168861056 kB' 'MemAvailable: 172176988 kB' 'Buffers: 4132 kB' 'Cached: 16112420 kB' 'SwapCached: 0 kB' 'Active: 12954692 kB' 'Inactive: 3710384 kB' 'Active(anon): 12475684 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551776 kB' 'Mapped: 178800 kB' 'Shmem: 11927160 kB' 'KReclaimable: 541984 kB' 'Slab: 1186804 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644820 kB' 'KernelStack: 20624 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13888460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316576 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
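One detail worth calling out from the trace is the expansion mem=("${mem[@]#Node +([0-9]) }") that appears at the start of every get_meminfo call: per-node meminfo files under /sys/devices/system/node prefix each field with "Node N ", and this extglob pattern strips that prefix so the same key scan works for both the system-wide and per-node files. A standalone demonstration of just that expansion follows; the sample lines are made up for illustration.

#!/usr/bin/env bash
# Shows the "Node N " prefix strip used by the traced helper; the sample data is illustrative.
shopt -s extglob                      # enables +([0-9]) inside the parameter expansion

mem=(
  'Node 0 MemTotal:       95690580 kB'
  'Node 0 HugePages_Total:     512'
  'MemFree:       84430544 kB'        # system-wide lines carry no prefix and pass through unchanged
)

mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# Prints:
#   MemTotal:       95690580 kB
#   HugePages_Total:     512
#   MemFree:       84430544 kB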
00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.497 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.498 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168860804 kB' 'MemAvailable: 172176736 kB' 'Buffers: 4132 kB' 'Cached: 16112436 kB' 'SwapCached: 0 kB' 'Active: 12954296 kB' 'Inactive: 3710384 kB' 'Active(anon): 12475288 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551904 kB' 'Mapped: 178800 kB' 'Shmem: 11927176 kB' 'KReclaimable: 541984 kB' 'Slab: 1186804 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644820 kB' 'KernelStack: 20640 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13888112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316544 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.499 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.500 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.501 nr_hugepages=1024 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.501 resv_hugepages=0 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.501 surplus_hugepages=0 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.501 anon_hugepages=0 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.501 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 
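The long runs of "-- # continue" above and below are one helper at work: setup/common.sh scans a meminfo file entry by entry, skipping every key that is not the one requested (here HugePages_Rsvd, then HugePages_Total), and echoes the value of the matching key. A minimal sketch of that pattern, simplified from what the trace shows (the real helper also pre-loads the file with mapfile and strips the "Node <n>" prefix via an extglob expansion, as the @28/@29 lines show; treat the code below as illustrative, not the script itself):

    get_meminfo() {                       # usage: get_meminfo <key> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        # per-node queries read that node's own meminfo file instead
        [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
            mem_f=/sys/devices/system/node/node${node}/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" lines in the trace
            echo "$val"                        # bare number (kB figure or page count)
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
    }

    resv=$(get_meminfo HugePages_Rsvd)     # 0 in the run above
    surp=$(get_meminfo HugePages_Surp 0)   # per-node variant used further down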
10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168861056 kB' 'MemAvailable: 172176988 kB' 'Buffers: 4132 kB' 'Cached: 16112460 kB' 'SwapCached: 0 kB' 'Active: 12954508 kB' 'Inactive: 3710384 kB' 'Active(anon): 12475500 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551552 kB' 'Mapped: 178800 kB' 'Shmem: 11927200 kB' 'KReclaimable: 541984 kB' 'Slab: 1186804 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644820 kB' 'KernelStack: 20560 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13888140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316528 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 
10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.764 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86680292 kB' 'MemUsed: 10982392 kB' 'SwapCached: 0 kB' 'Active: 7892108 kB' 'Inactive: 252504 kB' 'Active(anon): 7695056 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 252504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7728636 kB' 'Mapped: 118612 kB' 'AnonPages: 419168 kB' 'Shmem: 7279080 kB' 'KernelStack: 13176 kB' 'PageTables: 5764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 366692 kB' 'Slab: 683116 kB' 
'SReclaimable: 366692 kB' 'SUnreclaim: 316424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 
10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.765 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 82183316 kB' 'MemUsed: 11535160 kB' 'SwapCached: 0 kB' 'Active: 5062788 kB' 'Inactive: 3457880 kB' 'Active(anon): 4780832 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3457880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8387996 kB' 'Mapped: 60188 kB' 'AnonPages: 132776 kB' 'Shmem: 4648160 kB' 'KernelStack: 7416 kB' 'PageTables: 3096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175292 kB' 'Slab: 503688 kB' 'SReclaimable: 175292 kB' 
'SUnreclaim: 328396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 
10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.768 node0=512 expecting 512 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:48.768 node1=512 expecting 512 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.768 00:03:48.768 real 0m2.870s 00:03:48.768 user 0m1.086s 00:03:48.768 sys 0m1.841s 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.768 10:25:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.768 ************************************ 00:03:48.768 END TEST even_2G_alloc 00:03:48.768 ************************************ 00:03:48.768 10:25:56 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:48.768 10:25:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.768 10:25:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.768 10:25:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.768 ************************************ 00:03:48.768 START TEST odd_alloc 00:03:48.768 ************************************ 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:48.768 10:25:56 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.768 10:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:52.056 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.056 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.056 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.056 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168870656 kB' 'MemAvailable: 172186588 kB' 'Buffers: 4132 kB' 'Cached: 16112572 kB' 'SwapCached: 0 kB' 'Active: 12955728 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476720 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552660 kB' 'Mapped: 178852 kB' 'Shmem: 11927312 kB' 'KReclaimable: 541984 kB' 'Slab: 1186776 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644792 kB' 'KernelStack: 20656 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 13889112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316608 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
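A worked check of the odd_alloc sizing traced above: get_test_nr_hugepages is handed 2098176 kB (HUGEMEM=2049 MB) and settles on nr_hugepages=1025, which the per-node loop then splits as 513 and 512 across the two nodes. The ceiling division below is one plausible way to reproduce those numbers; only the resulting values, not the rounding rule itself, appear in the trace.

    #!/usr/bin/env bash
    # Sizing sketch for the odd_alloc values above (rounding rule assumed;
    # results match the trace: 1025 pages, split 513 + 512 over two nodes).
    size_kb=2098176          # from get_test_nr_hugepages 2098176
    hugepagesize_kb=2048     # Hugepagesize reported in the meminfo dumps
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # 1025
    half=$(( nr_hugepages / 2 ))
    echo "nr_hugepages=$nr_hugepages"                        # 1025
    echo "split: $(( half + nr_hugepages % 2 )) and $half"   # 513 and 512 (the trace gives node0 the odd page)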
00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
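The long runs of IFS=': ' / read -r var val _ / continue records in this trace are setup/common.sh scanning /proc/meminfo one "Key: value" pair at a time until it reaches the field it was asked for (AnonHugePages here, HugePages_Surp and HugePages_Rsvd further down). A minimal standalone sketch of that skip-until-match pattern follows; the function name and the 0 fallback for a key that never appears are illustrative additions, not taken from the trace.

    #!/usr/bin/env bash
    # Minimal sketch of the /proc/meminfo walk shown in the xtrace records above.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # same skip-until-match loop as the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        echo 0   # assumed fallback for a missing key
    }
    meminfo_value HugePages_Total   # 1025 during this test
    meminfo_value AnonHugePages     # 0 (kB), matching anon=0 below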
00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.057 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 
10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
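The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record near the start of this odd_alloc block is a transparent-hugepage check: the bracketed entry in /sys/kernel/mm/transparent_hugepage/enabled marks the active mode, and only when that mode is not [never] does the script go on to sample AnonHugePages (0 kB in the dump above). A short equivalent, assuming only the standard sysfs file:

    # THP check equivalent to the pattern test above; the file format
    # ("always [madvise] never", brackets marking the active mode) is standard sysfs.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        grep AnonHugePages /proc/meminfo    # anon THP in use (0 kB in this run)
    fi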
00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 
'MemFree: 168878228 kB' 'MemAvailable: 172194160 kB' 'Buffers: 4132 kB' 'Cached: 16112576 kB' 'SwapCached: 0 kB' 'Active: 12955088 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476080 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551988 kB' 'Mapped: 178844 kB' 'Shmem: 11927316 kB' 'KReclaimable: 541984 kB' 'Slab: 1186768 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644784 kB' 'KernelStack: 20592 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 13888924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316608 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.058 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.058 10:25:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
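Earlier in this log the even_2G_alloc verify step echoed "node0=512 expecting 512" and "node1=512 expecting 512"; the HugePages_Surp and HugePages_Rsvd walks here feed the same kind of per-node comparison for the odd 513/512 layout. The per-node counts themselves live in sysfs; a hedged sketch of reading them directly (standard kernel paths, expected values taken from the nodes_test assignments above):

    #!/usr/bin/env bash
    # Read the per-node 2 MiB hugepage counts that the verify step compares
    # against its expected split (paths are standard sysfs; for this odd_alloc
    # run the expectation is 513 on node0 and 512 on node1).
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "${n##*/}=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
    done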
00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.059 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 
10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168875616 kB' 'MemAvailable: 172191548 kB' 'Buffers: 4132 kB' 'Cached: 16112576 kB' 'SwapCached: 0 kB' 'Active: 12956096 kB' 'Inactive: 3710384 kB' 'Active(anon): 12477088 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552996 kB' 'Mapped: 179348 kB' 'Shmem: 11927316 kB' 'KReclaimable: 541984 kB' 'Slab: 1186768 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644784 kB' 'KernelStack: 20592 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 13891168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316576 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.060 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.061 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:52.062 nr_hugepages=1025
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.062 resv_hugepages=0
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.062 surplus_hugepages=0
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.062 anon_hugepages=0
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.062 10:25:59
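The xtrace above is setup/common.sh's get_meminfo walking a meminfo-style file entry by entry with IFS=': ' and read -r var val _, skipping every key until HugePages_Rsvd matches and its value (0) is echoed back; hugepages.sh then records nr_hugepages=1025 with no reserved, surplus or anonymous hugepages. A minimal stand-alone sketch of that lookup (the function name and argument handling here are illustrative, not the exact SPDK helper):

get_meminfo_field() {
        # Print the value of one meminfo-style key, e.g. HugePages_Rsvd -> 0.
        local get=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue   # skip non-matching keys, as traced above
                echo "$val"
                return 0
        done < "$file"
        return 1
}

Called as resv=$(get_meminfo_field HugePages_Rsvd), it would reproduce the resv=0 assignment seen in the trace.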
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168872088 kB' 'MemAvailable: 172188020 kB' 'Buffers: 4132 kB' 'Cached: 16112576 kB' 'SwapCached: 0 kB' 'Active: 12959628 kB' 'Inactive: 3710384 kB' 'Active(anon): 12480620 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556556 kB' 'Mapped: 179324 kB' 'Shmem: 11927316 kB' 'KReclaimable: 541984 kB' 'Slab: 1186760 kB' 'SReclaimable: 541984 kB' 'SUnreclaim: 644776 kB' 'KernelStack: 20592 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 13895292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316576 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 
10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.062 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.063 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86691844 kB' 'MemUsed: 10970840 kB' 'SwapCached: 0 kB' 'Active: 7898304 kB' 'Inactive: 252504 kB' 'Active(anon): 7701252 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 252504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7728672 kB' 'Mapped: 119520 kB' 'AnonPages: 425376 kB' 'Shmem: 7279116 kB' 'KernelStack: 13208 kB' 'PageTables: 5496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 366692 kB' 'Slab: 683064 kB' 'SReclaimable: 366692 kB' 'SUnreclaim: 316372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.064 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
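The per-node pass above repeats the same scan, but get_meminfo is invoked with a node argument (HugePages_Surp 0), so the reader is pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and the 'Node <N> ' prefix that the sysfs file adds to every line is stripped before parsing (the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace, which relies on extglob). A rough sketch of that source selection, with an assumed helper name:

node_meminfo_file() {
        # Fall back to the system-wide file when no node is given
        # or the per-node sysfs file does not exist.
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
}

Per-node lines look like 'Node 0 HugePages_Total:   512', so stripping the leading 'Node 0 ' leaves the same 'key: value' shape the system-wide parser already handles.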
00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.065 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 82181416 kB' 'MemUsed: 11537060 kB' 'SwapCached: 0 kB' 'Active: 5062300 kB' 'Inactive: 3457880 kB' 'Active(anon): 4780344 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3457880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8388092 kB' 'Mapped: 60204 kB' 'AnonPages: 132148 kB' 'Shmem: 4648256 kB' 'KernelStack: 7384 kB' 'PageTables: 3000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175292 kB' 'Slab: 503692 kB' 'SReclaimable: 175292 kB' 'SUnreclaim: 328400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
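At this point the odd_alloc bookkeeping is complete for node 0 and starts for node 1: the snapshots show HugePages_Total 512 on node0 and 513 on node1, no surplus or reserved pages on either, and the system-wide figures of 1025 pages and Hugetlb: 2099200 kB from earlier. A small sanity check with those values hard-coded from this run (not part of the test scripts):

# Values copied from this log: a 512 + 513 page split, 2048 kB hugepage size.
nodes_test=(512 513)
nr_hugepages=1025 surp=0 resv=0
total=$(( nodes_test[0] + nodes_test[1] ))
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage split: $total"
(( nr_hugepages * 2048 == 2099200 ))      || echo "Hugetlb accounting mismatch"

Both checks pass for the numbers above, matching the (( 1025 == nr_hugepages + surp + resv )) assertions traced in hugepages.sh.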
00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.066 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:52.067 node0=512 expecting 513 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:52.067 node1=513 expecting 512 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:52.067 00:03:52.067 real 0m3.000s 00:03:52.067 user 0m1.199s 00:03:52.067 sys 0m1.869s 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.067 10:25:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.067 ************************************ 00:03:52.067 END TEST odd_alloc 00:03:52.067 ************************************ 00:03:52.067 10:25:59 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:52.067 10:25:59 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.067 10:25:59 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.067 10:25:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.067 ************************************ 00:03:52.067 START TEST custom_alloc 00:03:52.067 ************************************ 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.067 10:25:59 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:52.067 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.068 10:25:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:54.597 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:54.597 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.597 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 
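The trace above (setup/hugepages.sh@167-187) shows custom_alloc turning two size requests into per-node hugepage counts and joining them into the HUGENODE string handed to setup.sh. The snippet below is a hedged reconstruction of just that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps later in this log; it is not the literal setup/hugepages.sh source, and the function name custom_alloc_sketch is illustrative only.

```bash
#!/usr/bin/env bash
# Hedged reconstruction (not the literal setup/hugepages.sh) of the HUGENODE
# assembly visible in the xtrace above: two size requests (in kB) become
# per-node 2 MiB page counts, then a comma-joined HUGENODE string.
custom_alloc_sketch() {
    local IFS=,                               # mirrors "local IFS=," at @167
    local default_hugepages=2048              # kB, Hugepagesize from the meminfo dump
    local -a nodes_hp HUGENODE
    local _nr_hugepages=0 node

    nodes_hp[0]=$(( 1048576 / default_hugepages ))   # 1 GiB request -> 512 pages
    nodes_hp[1]=$(( 2097152 / default_hugepages ))   # 2 GiB request -> 1024 pages

    for node in "${!nodes_hp[@]}"; do                # mirrors @181-183 in the trace
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done

    echo "HUGENODE=${HUGENODE[*]}"        # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=${_nr_hugepages}"  # 1536
}

custom_alloc_sketch
```

The resulting total (512 + 1024 = 1536) matches the nr_hugepages=1536 set just below and the 'HugePages_Total: 1536' value in both meminfo snapshots that follow.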
00:03:54.597 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.597 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 167813860 kB' 'MemAvailable: 171129748 kB' 'Buffers: 4132 kB' 'Cached: 16112732 kB' 'SwapCached: 0 kB' 'Active: 12955656 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476648 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552396 kB' 'Mapped: 178920 kB' 'Shmem: 11927472 kB' 'KReclaimable: 541896 kB' 'Slab: 1186876 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644980 kB' 'KernelStack: 20640 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 13889660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316640 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 
kB' 'DirectMap1G: 155189248 kB' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.598 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.599 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
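The long runs of "IFS=': '" / "read -r var val _" / "continue" lines in this trace are one field-by-field scan of a meminfo snapshot per lookup: AnonHugePages above (yielding anon=0), then HugePages_Surp. The sketch below captures that lookup pattern in a simplified, hedged form; get_meminfo_sketch is an illustrative name, and the real setup/common.sh additionally snapshots the file with mapfile and strips "Node N" prefixes, which is omitted here.

```bash
#!/usr/bin/env bash
# Simplified, hedged sketch of the lookup pattern the xtrace repeats for every
# meminfo field: scan "key: value ..." lines, skip until the requested key,
# echo its numeric value. Not the literal setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-scoped file instead (path assumed from the
    # "[[ -e /sys/devices/system/node/node/meminfo ]]" check in the trace).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. "[[ MemFree == HugePages_Surp ]] ... continue"
        echo "$val"                        # numeric value; a trailing "kB" unit lands in $_
        return 0
    done < "$mem_f"
    return 1
}

# How the verification step appears to use it in this run (values from the log):
anon=$(get_meminfo_sketch AnonHugePages)      # 0 in this run (trace shows anon=0)
surp=$(get_meminfo_sketch HugePages_Surp)     # 0 per the meminfo dump above
echo "anon=${anon} surp=${surp}"
```

Per-node variants of the same scan (node0, node1) are what the earlier odd_alloc section used to produce the "node0=512 expecting 513" / "node1=513 expecting 512" comparison, where the sorted per-node totals still match the sorted expectations.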
00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 167817264 kB' 'MemAvailable: 171133152 kB' 'Buffers: 4132 kB' 'Cached: 16112736 kB' 'SwapCached: 0 kB' 'Active: 12955172 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476164 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551892 kB' 'Mapped: 178888 kB' 'Shmem: 11927476 kB' 'KReclaimable: 541896 kB' 'Slab: 1186844 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644948 kB' 'KernelStack: 20592 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 13889676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316592 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.862 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 
10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.863 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 167817092 kB' 'MemAvailable: 171132980 kB' 'Buffers: 4132 kB' 'Cached: 16112736 kB' 'SwapCached: 0 kB' 'Active: 12955140 kB' 'Inactive: 3710384 kB' 'Active(anon): 12476132 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551860 kB' 'Mapped: 178828 kB' 'Shmem: 11927476 kB' 'KReclaimable: 541896 kB' 'Slab: 1186888 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644992 kB' 'KernelStack: 20592 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 102506320 kB' 'Committed_AS: 13889700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316592 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.864 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 
10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 
10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.865 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@33 -- # return 0 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:54.866 nr_hugepages=1536 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.866 resv_hugepages=0 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.866 surplus_hugepages=0 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.866 anon_hugepages=0 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.866 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 167816840 kB' 'MemAvailable: 171132728 kB' 'Buffers: 4132 kB' 'Cached: 16112772 kB' 'SwapCached: 0 kB' 'Active: 12954932 kB' 'Inactive: 3710384 kB' 'Active(anon): 12475924 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551636 kB' 'Mapped: 178828 kB' 'Shmem: 11927512 kB' 'KReclaimable: 541896 kB' 'Slab: 1186888 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644992 kB' 'KernelStack: 20576 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 13889720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316592 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:54.867 10:26:02 
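[editor's note] The snapshot just printed is internally consistent: HugePages_Total of 1536 at a Hugepagesize of 2048 kB gives exactly the Hugetlb figure of 3145728 kB, and the 512/1024 per-node split that get_nodes reads a few steps below sums to the same 1536 pages. A one-line sanity check (plain shell arithmetic, not part of the test itself):

echo $(( 1536 * 2048 ))   # 3145728 kB, matching the Hugetlb line above
echo $(( 512 + 1024 ))    # 1536, matching HugePages_Total
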
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.867 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.867 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.868 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.869 10:26:02 
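[editor's note] At this point the global accounting for the custom allocation is done: the scans above returned HugePages_Surp=0 and HugePages_Rsvd=0, the HugePages_Total of 1536 satisfied the check at hugepages.sh@107, and get_nodes picked up the 512/1024 split across the two NUMA nodes before the per-node loop re-checks each node. The pattern that repeats throughout these trace lines is setup/common.sh's get_meminfo walking /proc/meminfo (or a node's meminfo under sysfs) with IFS=': '. A minimal sketch of that lookup, reconstructed from the trace rather than quoted from the SPDK source, could look like this:

#!/usr/bin/env bash
# Sketch only: mirrors the get_meminfo behaviour visible in the trace above;
# the real setup/common.sh may differ in details.
shopt -s extglob                        # allows the +([0-9]) pattern below

get_meminfo() {
    local get=$1                        # field to report, e.g. HugePages_Surp
    local node=${2:-}                   # optional NUMA node number
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }     # per-node files prefix every line with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # numeric value only; the unit is dropped
            return 0
        fi
    done < "$mem_f"
    return 1
}

# The values this test asserts, expressed with the sketch:
echo "total=$(get_meminfo HugePages_Total) surp=$(get_meminfo HugePages_Surp) resv=$(get_meminfo HugePages_Rsvd)"
echo "node0 surplus=$(get_meminfo HugePages_Surp 0)"
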
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86692972 kB' 'MemUsed: 10969712 kB' 'SwapCached: 0 kB' 'Active: 7892868 kB' 'Inactive: 252504 kB' 'Active(anon): 7695816 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 252504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7728712 kB' 'Mapped: 118616 kB' 'AnonPages: 419824 kB' 'Shmem: 7279156 kB' 'KernelStack: 13192 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 366636 kB' 'Slab: 683292 kB' 'SReclaimable: 366636 kB' 'SUnreclaim: 316656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.869 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 81123936 kB' 'MemUsed: 12594540 kB' 'SwapCached: 0 kB' 'Active: 5062752 kB' 'Inactive: 3457880 kB' 'Active(anon): 4780796 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3457880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8388240 kB' 'Mapped: 60212 kB' 'AnonPages: 132452 kB' 'Shmem: 4648404 kB' 'KernelStack: 7400 kB' 'PageTables: 3052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175260 kB' 'Slab: 503596 kB' 'SReclaimable: 175260 kB' 'SUnreclaim: 328336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.870 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.871 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.872 node0=512 expecting 512 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
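The get_meminfo calls traced above all follow the same pattern: when a node id is passed and /sys/devices/system/node/node<N>/meminfo exists, that per-node file is read instead of /proc/meminfo, the leading "Node <N> " prefix is stripped from every line, and the key/value pairs are scanned until the requested key (HugePages_Surp here) is found and its value echoed. A minimal stand-alone sketch of that lookup, written against the behaviour visible in the trace; the helper name and error handling are assumptions, not the original setup/common.sh code:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # prefer the per-node meminfo when a node id is given and the file exists
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1                            # requested key not present
}

# get_meminfo_sketch HugePages_Surp 1   -> 0 on this machine, matching the trace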
00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:54.872 node1=1024 expecting 1024 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:54.872 00:03:54.872 real 0m3.039s 00:03:54.872 user 0m1.261s 00:03:54.872 sys 0m1.842s 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.872 10:26:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.872 ************************************ 00:03:54.872 END TEST custom_alloc 00:03:54.872 ************************************ 00:03:54.872 10:26:02 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:54.872 10:26:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.872 10:26:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.872 10:26:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.872 ************************************ 00:03:54.872 START TEST no_shrink_alloc 00:03:54.872 ************************************ 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.872 10:26:02 
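custom_alloc passes because the per-node counts reported above ("node0=512 expecting 512", "node1=1024 expecting 1024") join into "512,1024" and match the expected pattern in the [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] check. The no_shrink_alloc setup that starts next only converts the requested pool size into a page count and pins it to the single user node: with the 2048 kB hugepage size reported in meminfo, 2097152 kB / 2048 kB = 1024 pages on node 0. A short sketch of that arithmetic; variable names are illustrative, not the script's own:

default_hugepages=2048                              # kB per hugepage ("Hugepagesize: 2048 kB")
size_kb=2097152                                     # requested pool size in kB
(( size_kb >= default_hugepages )) || exit 1
nr_hugepages=$(( size_kb / default_hugepages ))     # 2097152 / 2048 = 1024
declare -a nodes_test
nodes_test[0]=$nr_hugepages                         # single user node '0' gets all pages
echo "nr_hugepages=$nr_hugepages"                   # -> nr_hugepages=1024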
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.872 10:26:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:58.164 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:58.165 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.165 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
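The verify_nr_hugepages pass traced below starts with two preliminary probes: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test checks that transparent hugepages are not globally disabled ("always [madvise] never" is the usual content of /sys/kernel/mm/transparent_hugepage/enabled), and the following get_meminfo AnonHugePages read returns 0 on this machine, so anon ends up as 0 before the HugePages counters are compared. A minimal sketch of that probe, reusing the lookup helper sketched earlier; the sysfs path and variable names are assumptions based on the values seen in the trace:

# assumes get_meminfo_sketch from the earlier sketch is defined
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP not disabled: account for anonymous hugepages (0 kB in this trace)
    anon=$(get_meminfo_sketch AnonHugePages)
fi
echo "anon=$anon"                                        # -> anon=0 here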
mem=("${mem[@]#Node +([0-9]) }") 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168848364 kB' 'MemAvailable: 172164252 kB' 'Buffers: 4132 kB' 'Cached: 16112880 kB' 'SwapCached: 0 kB' 'Active: 12957156 kB' 'Inactive: 3710384 kB' 'Active(anon): 12478148 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553672 kB' 'Mapped: 178964 kB' 'Shmem: 11927620 kB' 'KReclaimable: 541896 kB' 'Slab: 1186888 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644992 kB' 'KernelStack: 20880 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13893304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316800 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.165 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168853136 kB' 'MemAvailable: 172169024 kB' 'Buffers: 4132 kB' 'Cached: 16112884 kB' 'SwapCached: 0 kB' 'Active: 12956640 kB' 'Inactive: 3710384 kB' 'Active(anon): 12477632 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553204 kB' 'Mapped: 178840 kB' 'Shmem: 11927624 kB' 'KReclaimable: 541896 kB' 'Slab: 1186880 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644984 kB' 'KernelStack: 20768 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13893320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316720 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.166 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.167 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 
10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168853772 kB' 'MemAvailable: 172169660 kB' 'Buffers: 4132 kB' 'Cached: 16112904 kB' 'SwapCached: 0 kB' 'Active: 12956308 kB' 'Inactive: 3710384 kB' 'Active(anon): 12477300 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552868 kB' 'Mapped: 178840 kB' 'Shmem: 11927644 kB' 'KReclaimable: 541896 kB' 'Slab: 1186792 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644896 kB' 'KernelStack: 20656 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13893344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316736 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.168 
10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.168 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 
10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.169 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
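The xtrace above is the get_meminfo helper from setup/common.sh walking /proc/meminfo one field at a time: each line is split on ': ' into a name and a value, the name is compared against the requested key (HugePages_Rsvd at this point), and the matching value is echoed back to the caller. What follows is only a minimal sketch of that lookup, with an illustrative function name and a simplified per-node branch; the real helper in setup/common.sh also strips the leading "Node <N> " prefix from per-node meminfo files, which is omitted here.

get_meminfo_sketch() {
    # Field to look up (e.g. HugePages_Rsvd) and an optional NUMA node number.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Mirrors the [[ -e /sys/devices/system/node/node$node/meminfo ]] test in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val
    # xtrace renders the comparison as [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]].
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # 0 for HugePages_Rsvd in this run, 1024 for HugePages_Total
            return 0
        fi
    done < "$mem_f"
    return 1
}

In this run the lookup prints 0 for HugePages_Rsvd, which setup/hugepages.sh stores as resv=0 just below.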
00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.170 nr_hugepages=1024 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.170 resv_hugepages=0 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.170 surplus_hugepages=0 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.170 anon_hugepages=0 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.170 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168855756 kB' 'MemAvailable: 172171644 kB' 'Buffers: 4132 kB' 'Cached: 16112924 kB' 'SwapCached: 0 kB' 'Active: 12956832 kB' 'Inactive: 3710384 kB' 'Active(anon): 12477824 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552888 kB' 'Mapped: 178848 kB' 'Shmem: 11927664 kB' 'KReclaimable: 541896 kB' 'Slab: 1186760 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644864 kB' 'KernelStack: 20720 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13893364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316720 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 
10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.171 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
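Every "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" pair in the trace above is one iteration of a field-by-field scan over the meminfo text: setup/common.sh splits each line on ': ', skips keys that do not match the requested field, and echoes the value once the field is found. A minimal self-contained sketch of that lookup pattern, under the assumption that the helper behaves as the trace suggests (the name get_meminfo_sketch and its exact body are illustrative, not the repository source):

shopt -s extglob

# Look up one field from /proc/meminfo, or from a per-node meminfo file
# when a node number is given. Per-node lines carry a "Node N " prefix,
# which is stripped so the key sits in the first field either way.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. 1024 for HugePages_Total
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Used the way the trace uses it: get_meminfo_sketch HugePages_Total for the system-wide count, or get_meminfo_sketch HugePages_Surp 0 for node 0.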
00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=0 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.172 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85648552 kB' 'MemUsed: 12014132 kB' 'SwapCached: 0 kB' 'Active: 7892872 kB' 'Inactive: 252504 kB' 'Active(anon): 7695820 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 252504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7728720 kB' 'Mapped: 118616 kB' 'AnonPages: 419692 kB' 'Shmem: 7279164 kB' 'KernelStack: 13336 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 366636 kB' 'Slab: 683104 kB' 'SReclaimable: 366636 kB' 'SUnreclaim: 316468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.173 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.173 10:26:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.174 node0=1024 expecting 1024 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.174 10:26:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:00.754 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.754 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.754 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.754 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 
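The "node0=1024 expecting 1024" line above is the end of the per-node accounting pass: for every NUMA node the expected count is bumped by the reserved pages and by that node's HugePages_Surp, then printed next to the count the kernel actually reports. A rough sketch of that pass, reusing the hypothetical get_meminfo_sketch helper from earlier; the initialization of nodes_test happens before this point in the real script and is not visible in the trace, and the real script compares sorted sets of values at the end rather than node by node:

declare -a nodes_sys nodes_test   # indexed by NUMA node number

check_nodes_sketch() {
    local resv=${1:-0} node surp
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        (( nodes_test[node] += ${surp:-0} ))
        # Mirrors the "node0=1024 expecting 1024" output seen above.
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
    done
}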
00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168864688 kB' 'MemAvailable: 172180576 kB' 'Buffers: 4132 kB' 'Cached: 16113008 kB' 'SwapCached: 0 kB' 'Active: 12957380 kB' 'Inactive: 3710384 kB' 'Active(anon): 12478372 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553272 kB' 'Mapped: 179000 kB' 'Shmem: 11927748 kB' 'KReclaimable: 541896 kB' 'Slab: 1186864 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644968 kB' 'KernelStack: 20784 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13893688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316800 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.754 10:26:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.754 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
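The test "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" that opened this pass is a transparent-hugepage guard: only when THP is not pinned to "never" does AnonHugePages need to be read and folded into the accounting, which is why the scan above walks /proc/meminfo looking for that field. A small sketch of the idea; the path below is the standard THP control file, stated as an assumption since the trace only shows its expanded value:

thp_anon_kb_sketch() {
    local enabled anon=0
    # The kernel prints e.g. "always [madvise] never"; the brackets mark
    # the active setting, so a value containing "[never]" means THP off.
    enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $enabled != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # kB of anonymous THP
    fi
    echo "${anon:-0}"
}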
00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.755 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168864968 kB' 'MemAvailable: 172180856 kB' 'Buffers: 4132 kB' 'Cached: 16113012 kB' 'SwapCached: 0 kB' 'Active: 12957068 kB' 'Inactive: 3710384 kB' 'Active(anon): 12478060 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553444 kB' 'Mapped: 178876 kB' 'Shmem: 11927752 kB' 'KReclaimable: 541896 kB' 'Slab: 1186808 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644912 kB' 'KernelStack: 20800 kB' 'PageTables: 9636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13893708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316784 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.756 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.757 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168867372 kB' 'MemAvailable: 172183260 kB' 'Buffers: 4132 kB' 'Cached: 
16113028 kB' 'SwapCached: 0 kB' 'Active: 12958368 kB' 'Inactive: 3710384 kB' 'Active(anon): 12479360 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554764 kB' 'Mapped: 178876 kB' 'Shmem: 11927768 kB' 'KReclaimable: 541896 kB' 'Slab: 1186460 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644564 kB' 'KernelStack: 21344 kB' 'PageTables: 10936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13893728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316832 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.758 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
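
For readability, here is a minimal sketch of the lookup pattern this trace is exercising. The helper name below is hypothetical (the in-tree logic is the get_meminfo function in setup/common.sh, as the @17-@33 markers show): scan /proc/meminfo, or a per-NUMA-node copy under /sys/devices/system/node when a node is given, with an IFS=': ' read loop, skip every key that is not the requested one with continue, and echo the matching value.

#!/usr/bin/env bash
# Minimal sketch of the lookup traced above (hypothetical helper name; the
# in-tree implementation is setup/common.sh's get_meminfo).
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-NUMA-node copy instead, if present.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] && \
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node meminfo lines carry a "Node N " prefix; strip it so both
    # sources parse the same way (a no-op for /proc/meminfo).
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every mismatching key is skipped
        echo "$val"                        # e.g. "0" for HugePages_Surp
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

meminfo_value HugePages_Rsvd      # system-wide reserved huge pages
meminfo_value HugePages_Total 0   # same key, restricted to NUMA node 0

Each "continue" line in the trace above corresponds to one rejected key in this loop, which is why the scan is so verbose in the log.
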
00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.759 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.760 nr_hugepages=1024 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.760 resv_hugepages=0 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.760 surplus_hugepages=0 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.760 anon_hugepages=0 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
191381160 kB' 'MemFree: 168875988 kB' 'MemAvailable: 172191876 kB' 'Buffers: 4132 kB' 'Cached: 16113052 kB' 'SwapCached: 0 kB' 'Active: 12958572 kB' 'Inactive: 3710384 kB' 'Active(anon): 12479564 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554928 kB' 'Mapped: 178876 kB' 'Shmem: 11927792 kB' 'KReclaimable: 541896 kB' 'Slab: 1186268 kB' 'SReclaimable: 541896 kB' 'SUnreclaim: 644372 kB' 'KernelStack: 21344 kB' 'PageTables: 11200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 13891136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316752 kB' 'VmallocChunk: 0 kB' 'Percpu: 91776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3367892 kB' 'DirectMap2M: 43497472 kB' 'DirectMap1G: 155189248 kB' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.760 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
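
The hugepages.sh steps interleaved with these lookups (the @99-@110 markers in the trace: surp=0, resv=0, nr_hugepages=1024, then the arithmetic test) reduce to a simple reconciliation: the HugePages_Total reported by the kernel must equal the requested page count plus any surplus and reserved pages. A minimal sketch of that check, assuming a meminfo_value helper like the sketch above (names are illustrative, not the script's own):

#!/usr/bin/env bash
# Sketch of the reconciliation performed around hugepages.sh@99-@110
# (hypothetical names; meminfo_value is the sketch shown earlier).
nr_hugepages=1024                         # pages the test configured
surp=$(meminfo_value HugePages_Surp)      # 0 in the trace above
resv=$(meminfo_value HugePages_Rsvd)      # 0 in the trace above
total=$(meminfo_value HugePages_Total)    # 1024 in the trace above

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The kernel's total must account for every requested, surplus and reserved page.
(( total == nr_hugepages + surp + resv )) || {
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
    exit 1
}
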
00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.761 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
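
Once the system-wide HugePages_Total lookup returns 1024 and the totals reconcile, the trace moves on to per-node accounting (the get_nodes loop over /sys/devices/system/node/node[0-9]* and a node=0 read of /sys/devices/system/node/node0/meminfo). A minimal sketch of that per-node check, with illustrative names and no claim to match the script's exact variables: sum HugePages_Total across every NUMA node and compare the sum with the system-wide count.

#!/usr/bin/env bash
# Sketch of the per-NUMA-node follow-up (hypothetical names; the trace's
# get_nodes/get_meminfo pair does the equivalent via the setup scripts).
shopt -s nullglob
sum=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    pages=$(sed -nE "s/^Node $node HugePages_Total:[[:space:]]*([0-9]+).*/\1/p" \
                "$node_dir/meminfo")
    echo "node$node: ${pages:-0} huge pages"
    (( sum += ${pages:-0} ))
done

system_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( sum == system_total )) || \
    echo "per-node counts ($sum) != system total ($system_total)" >&2
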
00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85665172 kB' 'MemUsed: 11997512 kB' 'SwapCached: 0 kB' 'Active: 7892156 kB' 'Inactive: 252504 kB' 'Active(anon): 7695104 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 252504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7728724 kB' 'Mapped: 118644 kB' 'AnonPages: 419060 
kB' 'Shmem: 7279168 kB' 'KernelStack: 13240 kB' 'PageTables: 5556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 366636 kB' 'Slab: 682428 kB' 'SReclaimable: 366636 kB' 'SUnreclaim: 315792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.762 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 
10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.763 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.764 node0=1024 expecting 1024 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.764 00:04:00.764 real 0m5.868s 00:04:00.764 user 0m2.325s 00:04:00.764 sys 0m3.663s 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.764 10:26:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.764 ************************************ 00:04:00.764 END TEST no_shrink_alloc 00:04:00.764 ************************************ 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.764 10:26:08 
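The scan traced above is common.sh's get_meminfo helper reading the per-node meminfo file line by line until the requested key (HugePages_Total, then HugePages_Surp) matches, and echoing its value. A minimal sketch of the same parsing approach in bash; the function name is made up here and the digit-only filtering is a simplification, not the literal SPDK helper:

# Sketch: return one meminfo value, optionally for a single NUMA node (hypothetical helper).
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node stats, as used above
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}     # per-node files prefix every line with "Node N "
        var=${line%%:*}                # key name, e.g. HugePages_Total
        val=${line#*:}
        val=${val//[!0-9]/}            # keep the number only; the " kB" suffix is dropped
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

With the values above, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp 0 would print 0, which is what the hugepages.sh@110 and @117 arithmetic consumes.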
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:00.764 10:26:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:00.764 00:04:00.764 real 0m22.654s 00:04:00.764 user 0m8.528s 00:04:00.764 sys 0m13.163s 00:04:00.764 10:26:08 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.764 10:26:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.764 ************************************ 00:04:00.764 END TEST hugepages 00:04:00.764 ************************************ 00:04:01.022 10:26:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:01.022 10:26:08 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.022 10:26:08 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.022 10:26:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.022 ************************************ 00:04:01.022 START TEST driver 00:04:01.022 ************************************ 00:04:01.022 10:26:08 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:01.022 * Looking for test storage... 
00:04:01.022 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:01.022 10:26:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:01.022 10:26:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.022 10:26:08 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.207 10:26:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:05.207 10:26:12 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.207 10:26:12 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.207 10:26:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:05.207 ************************************ 00:04:05.207 START TEST guess_driver 00:04:05.207 ************************************ 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:05.207 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:05.207 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:05.207 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:05.207 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:05.207 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:05.207 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:05.207 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- 
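guess_driver above picks vfio-pci after confirming the unsafe-noiommu parameter exists, counting the IOMMU groups (174 on this node), and resolving the vfio_pci module through modprobe --show-depends, whose insmod .ko chain is what the *\.\k\o* glob checks. A condensed sketch of that decision; how the two preconditions combine is an assumption here, since the trace only shows both being evaluated:

# Sketch: choose vfio-pci when the platform can actually use it (hypothetical helper).
pick_vfio_sketch() {
    shopt -s nullglob                                   # empty array rather than a literal glob
    local unsafe_vfio=N
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # --show-depends prints the insmod .ko chain only when the module resolves
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    return 1
}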
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:05.207 Looking for driver=vfio-pci 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.207 10:26:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.736 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.994 10:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.372 10:26:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.372 10:26:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.372 10:26:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.372 10:26:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:09.372 10:26:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:09.372 10:26:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.372 10:26:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.559 00:04:13.559 real 0m8.229s 00:04:13.559 user 0m2.241s 00:04:13.559 sys 0m3.870s 00:04:13.559 10:26:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.559 10:26:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.559 ************************************ 00:04:13.559 END TEST guess_driver 00:04:13.559 ************************************ 00:04:13.559 00:04:13.559 real 0m12.365s 00:04:13.559 user 0m3.434s 00:04:13.559 sys 0m6.053s 00:04:13.559 10:26:20 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.559 
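The read -r _ _ _ _ marker setup_driver loop traced above walks the setup output config listing and treats every "-> <driver>" row as a bound device, failing the test if any row names a driver other than the one guessed. A small sketch of that check; $rootdir stands in for the spdk checkout path shown in the trace:

# Sketch: confirm every bound device reported by "setup.sh config" uses $driver.
verify_bound_driver_sketch() {
    local driver=$1 fail=0 marker setup_driver
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == "->" ]] || continue               # only rows that carry a bound driver
        [[ $setup_driver == "$driver" ]] || fail=1
    done < <("$rootdir/scripts/setup.sh" config)
    (( fail == 0 ))                                      # the driver.sh@64 check above
}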
10:26:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.559 ************************************ 00:04:13.559 END TEST driver 00:04:13.559 ************************************ 00:04:13.559 10:26:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:13.559 10:26:20 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.559 10:26:20 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.559 10:26:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.559 ************************************ 00:04:13.559 START TEST devices 00:04:13.559 ************************************ 00:04:13.559 10:26:20 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:13.559 * Looking for test storage... 00:04:13.559 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:13.559 10:26:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.559 10:26:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:13.559 10:26:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.559 10:26:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.085 10:26:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:16.085 10:26:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:16.085 10:26:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:16.085 10:26:23 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:16.085 No valid GPT data, bailing 00:04:16.344 
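devices.sh above qualifies nvme0n1 by checking that the namespace is not zoned, that spdk-gpt.py and blkid find no existing partition table (the "No valid GPT data, bailing" line), and that the disk is at least min_disk_size bytes before recording its PCI address (0000:5f:00.0). A hedged sketch of the zoned and size checks; computing bytes as 512-byte sectors from /sys/block is an assumption that matches the 1600321314816 figure echoed for nvme0n1:

# Sketch: the block-device qualification performed above (hypothetical helpers).
min_disk_size=$((3 * 1024 * 1024 * 1024))               # 3221225472, as set at devices.sh@198
is_block_zoned_sketch() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< /sys/block/$device/queue/zoned) != none ]]
}
block_qualifies_sketch() {
    local block=$1 sectors bytes
    is_block_zoned_sketch "$block" && return 1           # zoned namespaces are skipped
    sectors=$(< "/sys/block/$block/size")                # size in 512-byte sectors
    bytes=$(( sectors * 512 ))
    (( bytes >= min_disk_size ))
}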
10:26:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.344 10:26:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:16.344 10:26:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:16.344 10:26:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:16.344 10:26:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:16.344 10:26:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:16.344 10:26:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:16.344 10:26:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:16.344 10:26:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.344 10:26:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:04:16.344 10:26:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:16.344 10:26:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:16.344 10:26:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:16.344 10:26:23 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.344 10:26:23 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.344 10:26:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:16.344 ************************************ 00:04:16.344 START TEST nvme_mount 00:04:16.344 ************************************ 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:04:16.344 10:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:17.278 Creating new GPT entries in memory. 00:04:17.278 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.278 other utilities. 00:04:17.278 10:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.278 10:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.279 10:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.279 10:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.279 10:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.212 Creating new GPT entries in memory. 00:04:18.212 The operation has completed successfully. 00:04:18.212 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.213 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.213 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2021318 00:04:18.213 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.213 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:18.213 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.213 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:18.213 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
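The nvme_mount steps traced above zap the GPT, create one partition of exactly 1 GiB, wait for the partition uevent, then format and mount it under test/setup/nvme_mount. A compressed sketch of that sequence; the mount point is a stand-in path and udevadm settle replaces the sync_dev_uevents.sh wait used by the test:

# Sketch: the partition + format + mount sequence above (run as root).
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                                      # stand-in for the spdk test/setup/nvme_mount dir
sgdisk "$disk" --zap-all                                 # wipe any existing GPT (common.sh@56)
sgdisk "$disk" --new=1:2048:2099199                      # (2099199 - 2048 + 1) * 512 B = 1 GiB
udevadm settle                                           # the test waits for the nvme0n1p1 uevent instead
mkfs.ext4 -qF "${disk}p1"                                # common.sh@71
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"                                 # common.sh@72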
status 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.471 10:26:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.002 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.002 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.260 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:21.260 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:21.260 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:21.260 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:21.260 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:21.260 10:26:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:21.260 10:26:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
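cleanup_nvme above unmounts the test directory, then runs wipefs on the partition and on the whole disk; the erase messages show the ext4 magic (53 ef), both GPT headers, and the protective MBR going away. A short sketch of that teardown; the function name and arguments are illustrative:

# Sketch: the teardown performed by cleanup_nvme above.
cleanup_nvme_sketch() {
    local mnt=$1 disk=$2                                 # the nvme_mount dir and /dev/nvme0n1
    if mountpoint -q "$mnt"; then
        umount "$mnt"
    fi
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"       # clears the ext4 superblock magic
    [[ -b $disk ]] && wipefs --all "$disk"               # clears primary/backup GPT and the PMBR
    return 0
}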
00:04:21.261 10:26:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:21.261 10:26:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:21.261 10:26:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.535 10:26:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.062 10:26:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 
10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.344 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.344 00:04:27.344 real 0m10.683s 00:04:27.344 user 0m3.125s 00:04:27.344 sys 0m5.393s 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.344 10:26:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:27.344 ************************************ 00:04:27.344 END TEST nvme_mount 00:04:27.344 ************************************ 00:04:27.344 10:26:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:27.345 10:26:34 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
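The nvme_mount test that just finished is, at bottom, a format/mount/verify/cleanup cycle on the scratch namespace. A minimal stand-alone sketch of that cycle is shown below; the device path and mount point are illustrative placeholders, not values taken from the test scripts.

  # Hedged sketch of the nvme_mount flow: format, mount, drop a marker file, verify, clean up.
  dev=/dev/nvme0n1              # assumption: the namespace under test
  mnt=/tmp/nvme_mount_test      # placeholder mount point
  mkdir -p "$mnt"
  mkfs.ext4 -qF "$dev" 1024M    # quick-format a 1 GiB ext4 filesystem, as in the trace above
  mount "$dev" "$mnt"
  touch "$mnt/test_nvme"        # marker file the verify step looks for
  mountpoint -q "$mnt"          # the mount must be live
  [[ -e "$mnt/test_nvme" ]]     # and the marker file must exist
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all "$dev"           # erase the ext4 signature, as cleanup_nvme does above

The verify step in the real test additionally runs setup.sh with PCI_ALLOWED set to confirm that the active mount keeps 0000:5f:00.0 from being rebound away from the kernel nvme driver.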
00:04:27.345 10:26:34 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.345 10:26:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.345 ************************************ 00:04:27.345 START TEST dm_mount 00:04:27.345 ************************************ 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.345 10:26:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:27.912 Creating new GPT entries in memory. 00:04:27.912 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.912 other utilities. 00:04:27.912 10:26:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.912 10:26:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.912 10:26:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.912 10:26:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.912 10:26:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:29.284 Creating new GPT entries in memory. 00:04:29.284 The operation has completed successfully. 
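The partition_drive helper traced above carves the namespace into two equal 1 GiB partitions with sgdisk before the device-mapper test builds on top of them. Roughly equivalent stand-alone calls, with the device path again an assumption:

  dev=/dev/nvme0n1
  sgdisk "$dev" --zap-all                  # clear any existing GPT/MBR structures
  # 1 GiB = 1073741824 bytes / 512-byte sectors = 2097152 sectors per partition
  sgdisk "$dev" --new=1:2048:2099199       # partition 1: sectors 2048..2099199
  sgdisk "$dev" --new=2:2099200:4196351    # partition 2: the next 2097152 sectors
  partprobe "$dev"                         # one way to re-read the table; the test itself waits on udev block events via sync_dev_uevents.sh

The second --new call is the one traced just below; its start and end sectors follow from the part_start/part_end counters computed above.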
00:04:29.284 10:26:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:29.284 10:26:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.284 10:26:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.284 10:26:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.284 10:26:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:30.219 The operation has completed successfully. 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2025499 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:30.219 10:26:37 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.219 10:26:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.829 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.088 10:26:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
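For reference, the nvme_dm_test device created earlier in this test is an ordinary device-mapper target whose table is fed to dmsetup on standard input. The trace does not show the table itself, so the linear mapping below is only an assumption about what a two-partition concatenation could look like, not a quote from the scripts.

  # Hypothetical linear target spanning the two 1 GiB partitions (2097152 sectors each).
  printf '%s\n' \
    "0 2097152 linear /dev/nvme0n1p1 0" \
    "2097152 2097152 linear /dev/nvme0n1p2 0" | dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test     # resolves to the backing /dev/dm-N node (dm-2 in the trace)
  ls /sys/class/block/nvme0n1p1/holders    # each partition lists that dm-N as its holder

Those holders entries are what the verify pass above turns into the 'holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2' active-device list.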
00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:35.619 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:35.619 00:04:35.619 real 0m8.630s 00:04:35.619 user 0m2.018s 00:04:35.619 sys 0m3.526s 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.619 10:26:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:35.619 ************************************ 00:04:35.619 END TEST dm_mount 00:04:35.619 ************************************ 00:04:35.619 10:26:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:35.619 10:26:42 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:35.619 10:26:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.619 10:26:43 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.619 10:26:43 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:35.619 10:26:43 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
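The teardown traced above has to run in a fixed order: the mount comes off first, then the device-mapper target, then its backing partitions, and finally the disk itself. A condensed sketch of that order, with the mount point a placeholder:

  umount /tmp/dm_mount_test 2>/dev/null || true   # ignore if nothing is mounted
  dmsetup remove --force nvme_dm_test             # drop the dm target before touching the partitions
  wipefs --all /dev/nvme0n1p1                     # then the partitions it sat on
  wipefs --all /dev/nvme0n1p2
  wipefs --all /dev/nvme0n1                       # and finally the GPT on the whole disk

The last wipefs of the whole disk is what produces the 'calling ioctl to re-read partition table' line in the entries that follow.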
00:04:35.619 10:26:43 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:35.877 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:35.877 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:35.877 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:35.877 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:35.877 10:26:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:35.877 10:26:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:35.877 10:26:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.877 10:26:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.877 10:26:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:35.877 10:26:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.877 10:26:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:35.877 00:04:35.877 real 0m22.606s 00:04:35.877 user 0m6.205s 00:04:35.877 sys 0m10.847s 00:04:35.877 10:26:43 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.877 10:26:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.877 ************************************ 00:04:35.877 END TEST devices 00:04:35.877 ************************************ 00:04:35.877 00:04:35.877 real 1m17.218s 00:04:35.877 user 0m24.278s 00:04:35.877 sys 0m41.457s 00:04:35.878 10:26:43 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.878 10:26:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.878 ************************************ 00:04:35.878 END TEST setup.sh 00:04:35.878 ************************************ 00:04:36.135 10:26:43 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:38.667 Hugepages 00:04:38.667 node hugesize free / total 00:04:38.667 node0 1048576kB 0 / 0 00:04:38.667 node0 2048kB 2048 / 2048 00:04:38.667 node1 1048576kB 0 / 0 00:04:38.667 node1 2048kB 0 / 0 00:04:38.667 00:04:38.667 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.667 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:38.667 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:38.667 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:38.667 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:38.667 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:38.667 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:38.667 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:38.667 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:38.667 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:38.667 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:38.667 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:38.667 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:38.667 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:38.667 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:38.667 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:38.667 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:38.667 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:38.667 10:26:45 -- spdk/autotest.sh@130 -- # uname -s 00:04:38.667 10:26:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:38.667 10:26:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:38.667 10:26:45 -- 
common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:41.198 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.198 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.455 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.455 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.455 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.455 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.455 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.832 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:42.832 10:26:50 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:44.208 10:26:51 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:44.208 10:26:51 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:44.208 10:26:51 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.208 10:26:51 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:44.208 10:26:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:44.208 10:26:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:44.208 10:26:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.208 10:26:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:44.208 10:26:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:44.208 10:26:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:44.208 10:26:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:04:44.208 10:26:51 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.737 Waiting for block devices as requested 00:04:46.737 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:46.737 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:46.737 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:46.737 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:46.737 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:46.994 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:46.994 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:46.994 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:46.994 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:47.252 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:47.252 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:47.252 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:47.511 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:47.511 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:47.511 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:47.511 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:47.769 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:47.769 10:26:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:47.769 10:26:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1502 -- # 
readlink -f /sys/class/nvme/nvme0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1502 -- # grep 0000:5f:00.0/nvme/nvme 00:04:47.769 10:26:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:47.769 10:26:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:47.769 10:26:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:47.769 10:26:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:47.769 10:26:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:47.769 10:26:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:47.769 10:26:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:47.769 10:26:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:47.769 10:26:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:47.769 10:26:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:47.769 10:26:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:47.769 10:26:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:47.769 10:26:55 -- common/autotest_common.sh@1557 -- # continue 00:04:47.769 10:26:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:47.769 10:26:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.769 10:26:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 10:26:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:47.769 10:26:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.769 10:26:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 10:26:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:51.052 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:51.052 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:51.986 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:52.245 10:26:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:52.245 10:26:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.245 10:26:59 -- common/autotest_common.sh@10 -- # set +x 00:04:52.245 10:26:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:52.245 10:26:59 -- 
common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:52.245 10:26:59 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:52.245 10:26:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:52.245 10:26:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:52.245 10:26:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:52.245 10:26:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:52.245 10:26:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:52.245 10:26:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.245 10:26:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.245 10:26:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:52.245 10:26:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:52.245 10:26:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:04:52.245 10:26:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:52.245 10:26:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:52.245 10:26:59 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:52.245 10:26:59 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:52.245 10:26:59 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:52.245 10:26:59 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5f:00.0 00:04:52.245 10:26:59 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5f:00.0 ]] 00:04:52.245 10:26:59 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2034524 00:04:52.245 10:26:59 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.245 10:26:59 -- common/autotest_common.sh@1598 -- # waitforlisten 2034524 00:04:52.245 10:26:59 -- common/autotest_common.sh@831 -- # '[' -z 2034524 ']' 00:04:52.245 10:26:59 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.245 10:26:59 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.245 10:26:59 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.245 10:26:59 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.245 10:26:59 -- common/autotest_common.sh@10 -- # set +x 00:04:52.245 [2024-07-24 10:26:59.669605] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
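A few entries back, the pre-cleanup pass read the controller's OACS (Optional Admin Command Support) field with nvme id-ctrl and masked out bit 3 to decide whether namespace management is available: the trace shows oacs=' 0xe', and 0xe & 0x8 = 8, so the check passes. A stand-alone version of that check, with the controller node assumed to be /dev/nvme0:

  ctrlr=/dev/nvme0                                          # assumption
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0xe'
  ns_manage=$(( oacs & 0x8 ))                               # bit 3 = Namespace Management supported
  if (( ns_manage != 0 )); then
      echo "controller supports NVMe namespace management"
  fi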
00:04:52.245 [2024-07-24 10:26:59.669650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034524 ] 00:04:52.245 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.503 [2024-07-24 10:26:59.722905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.503 [2024-07-24 10:26:59.764222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.503 10:26:59 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.503 10:26:59 -- common/autotest_common.sh@864 -- # return 0 00:04:52.503 10:26:59 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:52.503 10:26:59 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:52.503 10:26:59 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:04:55.786 nvme0n1 00:04:55.786 10:27:02 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:55.786 [2024-07-24 10:27:03.092307] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:55.786 request: 00:04:55.786 { 00:04:55.786 "nvme_ctrlr_name": "nvme0", 00:04:55.786 "password": "test", 00:04:55.786 "method": "bdev_nvme_opal_revert", 00:04:55.786 "req_id": 1 00:04:55.786 } 00:04:55.786 Got JSON-RPC error response 00:04:55.786 response: 00:04:55.786 { 00:04:55.786 "code": -32602, 00:04:55.786 "message": "Invalid parameters" 00:04:55.786 } 00:04:55.786 10:27:03 -- common/autotest_common.sh@1604 -- # true 00:04:55.786 10:27:03 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:55.786 10:27:03 -- common/autotest_common.sh@1608 -- # killprocess 2034524 00:04:55.786 10:27:03 -- common/autotest_common.sh@950 -- # '[' -z 2034524 ']' 00:04:55.786 10:27:03 -- common/autotest_common.sh@954 -- # kill -0 2034524 00:04:55.786 10:27:03 -- common/autotest_common.sh@955 -- # uname 00:04:55.786 10:27:03 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.786 10:27:03 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2034524 00:04:55.786 10:27:03 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.786 10:27:03 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.786 10:27:03 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2034524' 00:04:55.786 killing process with pid 2034524 00:04:55.786 10:27:03 -- common/autotest_common.sh@969 -- # kill 2034524 00:04:55.786 10:27:03 -- common/autotest_common.sh@974 -- # wait 2034524 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:55.786 EAL: Unexpected 
size 0 of DMA remapping cleared instead of 
2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:56.058 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:57.962 10:27:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:57.962 10:27:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:57.962 10:27:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:57.962 10:27:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:57.962 10:27:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:57.962 10:27:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.962 10:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:57.962 10:27:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:57.962 10:27:05 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:57.962 10:27:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.962 10:27:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.962 10:27:05 -- common/autotest_common.sh@10 -- # set +x 00:04:57.962 ************************************ 00:04:57.962 START TEST env 00:04:57.962 ************************************ 00:04:57.962 10:27:05 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:57.962 * Looking for test storage... 
00:04:57.962 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:57.962 10:27:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:57.962 10:27:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.234 10:27:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.234 10:27:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.234 ************************************ 00:04:58.234 START TEST env_memory 00:04:58.234 ************************************ 00:04:58.234 10:27:05 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:58.234 00:04:58.234 00:04:58.234 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.234 http://cunit.sourceforge.net/ 00:04:58.234 00:04:58.234 00:04:58.234 Suite: memory 00:04:58.234 Test: alloc and free memory map ...[2024-07-24 10:27:05.492724] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.234 passed 00:04:58.234 Test: mem map translation ...[2024-07-24 10:27:05.510276] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.234 [2024-07-24 10:27:05.510291] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.234 [2024-07-24 10:27:05.510325] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.234 [2024-07-24 10:27:05.510331] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.234 passed 00:04:58.234 Test: mem map registration ...[2024-07-24 10:27:05.545857] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:58.234 [2024-07-24 10:27:05.545870] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:58.234 passed 00:04:58.234 Test: mem map adjacent registrations ...passed 00:04:58.234 00:04:58.234 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.234 suites 1 1 n/a 0 0 00:04:58.234 tests 4 4 4 0 0 00:04:58.234 asserts 152 152 152 0 n/a 00:04:58.234 00:04:58.234 Elapsed time = 0.131 seconds 00:04:58.234 00:04:58.234 real 0m0.143s 00:04:58.234 user 0m0.134s 00:04:58.234 sys 0m0.009s 00:04:58.234 10:27:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.234 10:27:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:58.234 ************************************ 00:04:58.234 END TEST env_memory 00:04:58.234 ************************************ 00:04:58.234 10:27:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.234 10:27:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.234 10:27:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.234 10:27:05 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:58.234 ************************************ 00:04:58.234 START TEST env_vtophys 00:04:58.234 ************************************ 00:04:58.234 10:27:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.234 EAL: lib.eal log level changed from notice to debug 00:04:58.234 EAL: Detected lcore 0 as core 0 on socket 0 00:04:58.234 EAL: Detected lcore 1 as core 1 on socket 0 00:04:58.234 EAL: Detected lcore 2 as core 2 on socket 0 00:04:58.234 EAL: Detected lcore 3 as core 3 on socket 0 00:04:58.234 EAL: Detected lcore 4 as core 4 on socket 0 00:04:58.234 EAL: Detected lcore 5 as core 5 on socket 0 00:04:58.234 EAL: Detected lcore 6 as core 6 on socket 0 00:04:58.234 EAL: Detected lcore 7 as core 9 on socket 0 00:04:58.234 EAL: Detected lcore 8 as core 10 on socket 0 00:04:58.234 EAL: Detected lcore 9 as core 11 on socket 0 00:04:58.234 EAL: Detected lcore 10 as core 12 on socket 0 00:04:58.234 EAL: Detected lcore 11 as core 13 on socket 0 00:04:58.234 EAL: Detected lcore 12 as core 16 on socket 0 00:04:58.234 EAL: Detected lcore 13 as core 17 on socket 0 00:04:58.234 EAL: Detected lcore 14 as core 18 on socket 0 00:04:58.234 EAL: Detected lcore 15 as core 19 on socket 0 00:04:58.234 EAL: Detected lcore 16 as core 20 on socket 0 00:04:58.234 EAL: Detected lcore 17 as core 21 on socket 0 00:04:58.234 EAL: Detected lcore 18 as core 24 on socket 0 00:04:58.234 EAL: Detected lcore 19 as core 25 on socket 0 00:04:58.234 EAL: Detected lcore 20 as core 26 on socket 0 00:04:58.234 EAL: Detected lcore 21 as core 27 on socket 0 00:04:58.234 EAL: Detected lcore 22 as core 28 on socket 0 00:04:58.234 EAL: Detected lcore 23 as core 29 on socket 0 00:04:58.234 EAL: Detected lcore 24 as core 0 on socket 1 00:04:58.234 EAL: Detected lcore 25 as core 1 on socket 1 00:04:58.234 EAL: Detected lcore 26 as core 2 on socket 1 00:04:58.234 EAL: Detected lcore 27 as core 3 on socket 1 00:04:58.234 EAL: Detected lcore 28 as core 4 on socket 1 00:04:58.234 EAL: Detected lcore 29 as core 5 on socket 1 00:04:58.234 EAL: Detected lcore 30 as core 6 on socket 1 00:04:58.234 EAL: Detected lcore 31 as core 8 on socket 1 00:04:58.234 EAL: Detected lcore 32 as core 9 on socket 1 00:04:58.234 EAL: Detected lcore 33 as core 10 on socket 1 00:04:58.234 EAL: Detected lcore 34 as core 11 on socket 1 00:04:58.234 EAL: Detected lcore 35 as core 12 on socket 1 00:04:58.234 EAL: Detected lcore 36 as core 13 on socket 1 00:04:58.234 EAL: Detected lcore 37 as core 16 on socket 1 00:04:58.234 EAL: Detected lcore 38 as core 17 on socket 1 00:04:58.234 EAL: Detected lcore 39 as core 18 on socket 1 00:04:58.234 EAL: Detected lcore 40 as core 19 on socket 1 00:04:58.234 EAL: Detected lcore 41 as core 20 on socket 1 00:04:58.234 EAL: Detected lcore 42 as core 21 on socket 1 00:04:58.234 EAL: Detected lcore 43 as core 25 on socket 1 00:04:58.234 EAL: Detected lcore 44 as core 26 on socket 1 00:04:58.234 EAL: Detected lcore 45 as core 27 on socket 1 00:04:58.234 EAL: Detected lcore 46 as core 28 on socket 1 00:04:58.234 EAL: Detected lcore 47 as core 29 on socket 1 00:04:58.234 EAL: Detected lcore 48 as core 0 on socket 0 00:04:58.234 EAL: Detected lcore 49 as core 1 on socket 0 00:04:58.234 EAL: Detected lcore 50 as core 2 on socket 0 00:04:58.234 EAL: Detected lcore 51 as core 3 on socket 0 00:04:58.234 EAL: Detected lcore 52 as core 4 on socket 0 00:04:58.234 EAL: Detected lcore 53 as core 5 on socket 0 
00:04:58.234 EAL: Detected lcore 54 as core 6 on socket 0 00:04:58.234 EAL: Detected lcore 55 as core 9 on socket 0 00:04:58.234 EAL: Detected lcore 56 as core 10 on socket 0 00:04:58.234 EAL: Detected lcore 57 as core 11 on socket 0 00:04:58.234 EAL: Detected lcore 58 as core 12 on socket 0 00:04:58.234 EAL: Detected lcore 59 as core 13 on socket 0 00:04:58.234 EAL: Detected lcore 60 as core 16 on socket 0 00:04:58.234 EAL: Detected lcore 61 as core 17 on socket 0 00:04:58.234 EAL: Detected lcore 62 as core 18 on socket 0 00:04:58.234 EAL: Detected lcore 63 as core 19 on socket 0 00:04:58.234 EAL: Detected lcore 64 as core 20 on socket 0 00:04:58.234 EAL: Detected lcore 65 as core 21 on socket 0 00:04:58.234 EAL: Detected lcore 66 as core 24 on socket 0 00:04:58.234 EAL: Detected lcore 67 as core 25 on socket 0 00:04:58.234 EAL: Detected lcore 68 as core 26 on socket 0 00:04:58.234 EAL: Detected lcore 69 as core 27 on socket 0 00:04:58.234 EAL: Detected lcore 70 as core 28 on socket 0 00:04:58.234 EAL: Detected lcore 71 as core 29 on socket 0 00:04:58.234 EAL: Detected lcore 72 as core 0 on socket 1 00:04:58.234 EAL: Detected lcore 73 as core 1 on socket 1 00:04:58.234 EAL: Detected lcore 74 as core 2 on socket 1 00:04:58.234 EAL: Detected lcore 75 as core 3 on socket 1 00:04:58.234 EAL: Detected lcore 76 as core 4 on socket 1 00:04:58.234 EAL: Detected lcore 77 as core 5 on socket 1 00:04:58.234 EAL: Detected lcore 78 as core 6 on socket 1 00:04:58.234 EAL: Detected lcore 79 as core 8 on socket 1 00:04:58.234 EAL: Detected lcore 80 as core 9 on socket 1 00:04:58.234 EAL: Detected lcore 81 as core 10 on socket 1 00:04:58.234 EAL: Detected lcore 82 as core 11 on socket 1 00:04:58.234 EAL: Detected lcore 83 as core 12 on socket 1 00:04:58.234 EAL: Detected lcore 84 as core 13 on socket 1 00:04:58.234 EAL: Detected lcore 85 as core 16 on socket 1 00:04:58.234 EAL: Detected lcore 86 as core 17 on socket 1 00:04:58.234 EAL: Detected lcore 87 as core 18 on socket 1 00:04:58.234 EAL: Detected lcore 88 as core 19 on socket 1 00:04:58.234 EAL: Detected lcore 89 as core 20 on socket 1 00:04:58.234 EAL: Detected lcore 90 as core 21 on socket 1 00:04:58.234 EAL: Detected lcore 91 as core 25 on socket 1 00:04:58.234 EAL: Detected lcore 92 as core 26 on socket 1 00:04:58.234 EAL: Detected lcore 93 as core 27 on socket 1 00:04:58.234 EAL: Detected lcore 94 as core 28 on socket 1 00:04:58.234 EAL: Detected lcore 95 as core 29 on socket 1 00:04:58.582 EAL: Maximum logical cores by configuration: 128 00:04:58.582 EAL: Detected CPU lcores: 96 00:04:58.582 EAL: Detected NUMA nodes: 2 00:04:58.582 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:58.582 EAL: Detected shared linkage of DPDK 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:58.582 EAL: Registered [vdev] bus. 
00:04:58.582 EAL: bus.vdev log level changed from disabled to notice 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:58.582 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:58.582 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:58.582 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:58.582 EAL: No shared files mode enabled, IPC will be disabled 00:04:58.582 EAL: No shared files mode enabled, IPC is disabled 00:04:58.582 EAL: Bus pci wants IOVA as 'DC' 00:04:58.582 EAL: Bus vdev wants IOVA as 'DC' 00:04:58.582 EAL: Buses did not request a specific IOVA mode. 00:04:58.582 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:58.582 EAL: Selected IOVA mode 'VA' 00:04:58.583 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.583 EAL: Probing VFIO support... 00:04:58.583 EAL: IOMMU type 1 (Type 1) is supported 00:04:58.583 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:58.583 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:58.583 EAL: VFIO support initialized 00:04:58.583 EAL: Ask a virtual area of 0x2e000 bytes 00:04:58.583 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:58.583 EAL: Setting up physically contiguous memory... 
00:04:58.583 EAL: Setting maximum number of open files to 524288 00:04:58.583 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:58.583 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:58.583 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:58.583 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:58.583 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.583 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:58.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.583 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.583 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:58.583 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:58.583 EAL: Hugepages will be freed exactly as allocated. 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: TSC frequency is ~2100000 KHz 00:04:58.583 EAL: Main lcore 0 is ready (tid=7f292ef27a00;cpuset=[0]) 00:04:58.583 EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 0 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 2MB 00:04:58.583 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:58.583 EAL: probe driver: 8086:37d2 net_i40e 00:04:58.583 EAL: Not managed by a supported kernel driver, skipped 00:04:58.583 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:58.583 EAL: probe driver: 8086:37d2 net_i40e 00:04:58.583 EAL: Not managed by a supported kernel driver, skipped 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:58.583 EAL: Mem event callback 'spdk:(nil)' registered 00:04:58.583 00:04:58.583 00:04:58.583 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.583 http://cunit.sourceforge.net/ 00:04:58.583 00:04:58.583 00:04:58.583 Suite: components_suite 00:04:58.583 Test: vtophys_malloc_test ...passed 00:04:58.583 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 4MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was shrunk by 4MB 00:04:58.583 EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 6MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was shrunk by 6MB 00:04:58.583 EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 10MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was shrunk by 10MB 00:04:58.583 EAL: Trying to obtain current memory policy. 
00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 18MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was shrunk by 18MB 00:04:58.583 EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 34MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was shrunk by 34MB 00:04:58.583 EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 66MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was shrunk by 66MB 00:04:58.583 EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 130MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was shrunk by 130MB 00:04:58.583 EAL: Trying to obtain current memory policy. 00:04:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.583 EAL: Restoring previous memory policy: 4 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.583 EAL: request: mp_malloc_sync 00:04:58.583 EAL: No shared files mode enabled, IPC is disabled 00:04:58.583 EAL: Heap on socket 0 was expanded by 258MB 00:04:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.584 EAL: request: mp_malloc_sync 00:04:58.584 EAL: No shared files mode enabled, IPC is disabled 00:04:58.584 EAL: Heap on socket 0 was shrunk by 258MB 00:04:58.584 EAL: Trying to obtain current memory policy. 
00:04:58.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.842 EAL: Restoring previous memory policy: 4 00:04:58.842 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.842 EAL: request: mp_malloc_sync 00:04:58.842 EAL: No shared files mode enabled, IPC is disabled 00:04:58.842 EAL: Heap on socket 0 was expanded by 514MB 00:04:58.842 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.842 EAL: request: mp_malloc_sync 00:04:58.842 EAL: No shared files mode enabled, IPC is disabled 00:04:58.842 EAL: Heap on socket 0 was shrunk by 514MB 00:04:58.842 EAL: Trying to obtain current memory policy. 00:04:58.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.101 EAL: Restoring previous memory policy: 4 00:04:59.101 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.101 EAL: request: mp_malloc_sync 00:04:59.101 EAL: No shared files mode enabled, IPC is disabled 00:04:59.101 EAL: Heap on socket 0 was expanded by 1026MB 00:04:59.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.359 EAL: request: mp_malloc_sync 00:04:59.359 EAL: No shared files mode enabled, IPC is disabled 00:04:59.359 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:59.359 passed 00:04:59.359 00:04:59.359 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.359 suites 1 1 n/a 0 0 00:04:59.359 tests 2 2 2 0 0 00:04:59.359 asserts 497 497 497 0 n/a 00:04:59.359 00:04:59.359 Elapsed time = 0.965 seconds 00:04:59.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.359 EAL: request: mp_malloc_sync 00:04:59.359 EAL: No shared files mode enabled, IPC is disabled 00:04:59.359 EAL: Heap on socket 0 was shrunk by 2MB 00:04:59.359 EAL: No shared files mode enabled, IPC is disabled 00:04:59.359 EAL: No shared files mode enabled, IPC is disabled 00:04:59.359 EAL: No shared files mode enabled, IPC is disabled 00:04:59.359 00:04:59.359 real 0m1.072s 00:04:59.359 user 0m0.626s 00:04:59.359 sys 0m0.412s 00:04:59.359 10:27:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.359 10:27:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:59.359 ************************************ 00:04:59.359 END TEST env_vtophys 00:04:59.359 ************************************ 00:04:59.359 10:27:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:59.359 10:27:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.359 10:27:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.359 10:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.359 ************************************ 00:04:59.359 START TEST env_pci 00:04:59.359 ************************************ 00:04:59.359 10:27:06 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:59.359 00:04:59.359 00:04:59.359 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.359 http://cunit.sourceforge.net/ 00:04:59.359 00:04:59.359 00:04:59.359 Suite: pci 00:04:59.359 Test: pci_hook ...[2024-07-24 10:27:06.810858] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2035968 has claimed it 00:04:59.618 EAL: Cannot find device (10000:00:01.0) 00:04:59.618 EAL: Failed to attach device on primary process 00:04:59.618 passed 00:04:59.618 00:04:59.618 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.618 suites 1 
1 n/a 0 0 00:04:59.618 tests 1 1 1 0 0 00:04:59.618 asserts 25 25 25 0 n/a 00:04:59.618 00:04:59.618 Elapsed time = 0.029 seconds 00:04:59.618 00:04:59.618 real 0m0.046s 00:04:59.618 user 0m0.011s 00:04:59.618 sys 0m0.035s 00:04:59.618 10:27:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.618 10:27:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 ************************************ 00:04:59.618 END TEST env_pci 00:04:59.618 ************************************ 00:04:59.618 10:27:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:59.618 10:27:06 env -- env/env.sh@15 -- # uname 00:04:59.618 10:27:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:59.618 10:27:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:59.618 10:27:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.618 10:27:06 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:59.618 10:27:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.618 10:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.618 ************************************ 00:04:59.618 START TEST env_dpdk_post_init 00:04:59.618 ************************************ 00:04:59.618 10:27:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.618 EAL: Detected CPU lcores: 96 00:04:59.618 EAL: Detected NUMA nodes: 2 00:04:59.618 EAL: Detected shared linkage of DPDK 00:04:59.618 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.618 EAL: Selected IOVA mode 'VA' 00:04:59.618 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.618 EAL: VFIO support initialized 00:04:59.618 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.618 EAL: Using IOMMU type 1 (Type 1) 00:04:59.618 EAL: Ignore mapping IO port bar(1) 00:04:59.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:59.618 EAL: Ignore mapping IO port bar(1) 00:04:59.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:59.618 EAL: Ignore mapping IO port bar(1) 00:04:59.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:59.618 EAL: Ignore mapping IO port bar(1) 00:04:59.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:59.877 EAL: Ignore mapping IO port bar(1) 00:04:59.877 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:59.877 EAL: Ignore mapping IO port bar(1) 00:04:59.877 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:59.877 EAL: Ignore mapping IO port bar(1) 00:04:59.877 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:59.877 EAL: Ignore mapping IO port bar(1) 00:04:59.877 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:00.444 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:00.444 EAL: Ignore mapping IO port bar(1) 00:05:00.444 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:00.444 EAL: Ignore mapping IO port bar(1) 00:05:00.444 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:00.444 EAL: Ignore mapping 
IO port bar(1) 00:05:00.444 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:00.444 EAL: Ignore mapping IO port bar(1) 00:05:00.444 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:00.702 EAL: Ignore mapping IO port bar(1) 00:05:00.702 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:00.702 EAL: Ignore mapping IO port bar(1) 00:05:00.702 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:00.702 EAL: Ignore mapping IO port bar(1) 00:05:00.702 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:00.702 EAL: Ignore mapping IO port bar(1) 00:05:00.702 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:04.886 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:04.886 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:05:04.886 Starting DPDK initialization... 00:05:04.886 Starting SPDK post initialization... 00:05:04.886 SPDK NVMe probe 00:05:04.886 Attaching to 0000:5f:00.0 00:05:04.886 Attached to 0000:5f:00.0 00:05:04.886 Cleaning up... 00:05:04.886 00:05:04.886 real 0m4.900s 00:05:04.886 user 0m3.806s 00:05:04.886 sys 0m0.164s 00:05:04.886 10:27:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.886 10:27:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.886 ************************************ 00:05:04.886 END TEST env_dpdk_post_init 00:05:04.886 ************************************ 00:05:04.886 10:27:11 env -- env/env.sh@26 -- # uname 00:05:04.886 10:27:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:04.886 10:27:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.886 10:27:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.886 10:27:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.886 10:27:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.886 ************************************ 00:05:04.886 START TEST env_mem_callbacks 00:05:04.886 ************************************ 00:05:04.886 10:27:11 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.886 EAL: Detected CPU lcores: 96 00:05:04.886 EAL: Detected NUMA nodes: 2 00:05:04.886 EAL: Detected shared linkage of DPDK 00:05:04.886 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.886 EAL: Selected IOVA mode 'VA' 00:05:04.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.886 EAL: VFIO support initialized 00:05:04.886 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.886 00:05:04.886 00:05:04.886 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.886 http://cunit.sourceforge.net/ 00:05:04.886 00:05:04.886 00:05:04.886 Suite: memory 00:05:04.886 Test: test ... 
00:05:04.886 register 0x200000200000 2097152 00:05:04.886 malloc 3145728 00:05:04.886 register 0x200000400000 4194304 00:05:04.886 buf 0x200000500000 len 3145728 PASSED 00:05:04.886 malloc 64 00:05:04.886 buf 0x2000004fff40 len 64 PASSED 00:05:04.886 malloc 4194304 00:05:04.886 register 0x200000800000 6291456 00:05:04.886 buf 0x200000a00000 len 4194304 PASSED 00:05:04.886 free 0x200000500000 3145728 00:05:04.886 free 0x2000004fff40 64 00:05:04.886 unregister 0x200000400000 4194304 PASSED 00:05:04.886 free 0x200000a00000 4194304 00:05:04.886 unregister 0x200000800000 6291456 PASSED 00:05:04.886 malloc 8388608 00:05:04.886 register 0x200000400000 10485760 00:05:04.886 buf 0x200000600000 len 8388608 PASSED 00:05:04.886 free 0x200000600000 8388608 00:05:04.886 unregister 0x200000400000 10485760 PASSED 00:05:04.886 passed 00:05:04.886 00:05:04.886 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.886 suites 1 1 n/a 0 0 00:05:04.886 tests 1 1 1 0 0 00:05:04.886 asserts 15 15 15 0 n/a 00:05:04.886 00:05:04.886 Elapsed time = 0.005 seconds 00:05:04.886 00:05:04.886 real 0m0.051s 00:05:04.886 user 0m0.014s 00:05:04.886 sys 0m0.037s 00:05:04.886 10:27:11 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.886 10:27:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:04.886 ************************************ 00:05:04.886 END TEST env_mem_callbacks 00:05:04.886 ************************************ 00:05:04.886 00:05:04.886 real 0m6.632s 00:05:04.886 user 0m4.771s 00:05:04.886 sys 0m0.923s 00:05:04.886 10:27:11 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.886 10:27:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.887 ************************************ 00:05:04.887 END TEST env 00:05:04.887 ************************************ 00:05:04.887 10:27:11 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:04.887 10:27:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.887 10:27:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.887 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.887 ************************************ 00:05:04.887 START TEST rpc 00:05:04.887 ************************************ 00:05:04.887 10:27:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:04.887 * Looking for test storage... 00:05:04.887 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:04.887 10:27:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2037371 00:05:04.887 10:27:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.887 10:27:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:04.887 10:27:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2037371 00:05:04.887 10:27:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 2037371 ']' 00:05:04.887 10:27:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.887 10:27:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.887 10:27:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:04.887 10:27:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.887 10:27:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.887 [2024-07-24 10:27:12.164146] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:04.887 [2024-07-24 10:27:12.164194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037371 ] 00:05:04.887 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.887 [2024-07-24 10:27:12.216722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.887 [2024-07-24 10:27:12.256844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:04.887 [2024-07-24 10:27:12.256885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2037371' to capture a snapshot of events at runtime. 00:05:04.887 [2024-07-24 10:27:12.256893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:04.887 [2024-07-24 10:27:12.256900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:04.887 [2024-07-24 10:27:12.256905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2037371 for offline analysis/debug. 00:05:04.887 [2024-07-24 10:27:12.256940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.146 10:27:12 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.146 10:27:12 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:05.146 10:27:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:05.146 10:27:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:05.146 10:27:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:05.146 10:27:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:05.146 10:27:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.146 10:27:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.146 10:27:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 ************************************ 00:05:05.146 START TEST rpc_integrity 00:05:05.146 ************************************ 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # 
'[' 0 == 0 ']' 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.146 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:05.146 { 00:05:05.146 "name": "Malloc0", 00:05:05.146 "aliases": [ 00:05:05.146 "4a977a34-a0a7-4bac-8464-4d5766a60bb3" 00:05:05.146 ], 00:05:05.146 "product_name": "Malloc disk", 00:05:05.146 "block_size": 512, 00:05:05.146 "num_blocks": 16384, 00:05:05.146 "uuid": "4a977a34-a0a7-4bac-8464-4d5766a60bb3", 00:05:05.146 "assigned_rate_limits": { 00:05:05.146 "rw_ios_per_sec": 0, 00:05:05.146 "rw_mbytes_per_sec": 0, 00:05:05.146 "r_mbytes_per_sec": 0, 00:05:05.146 "w_mbytes_per_sec": 0 00:05:05.146 }, 00:05:05.146 "claimed": false, 00:05:05.146 "zoned": false, 00:05:05.146 "supported_io_types": { 00:05:05.146 "read": true, 00:05:05.146 "write": true, 00:05:05.146 "unmap": true, 00:05:05.146 "flush": true, 00:05:05.146 "reset": true, 00:05:05.146 "nvme_admin": false, 00:05:05.146 "nvme_io": false, 00:05:05.146 "nvme_io_md": false, 00:05:05.146 "write_zeroes": true, 00:05:05.146 "zcopy": true, 00:05:05.146 "get_zone_info": false, 00:05:05.146 "zone_management": false, 00:05:05.146 "zone_append": false, 00:05:05.146 "compare": false, 00:05:05.146 "compare_and_write": false, 00:05:05.146 "abort": true, 00:05:05.146 "seek_hole": false, 00:05:05.146 "seek_data": false, 00:05:05.146 "copy": true, 00:05:05.146 "nvme_iov_md": false 00:05:05.146 }, 00:05:05.146 "memory_domains": [ 00:05:05.146 { 00:05:05.146 "dma_device_id": "system", 00:05:05.146 "dma_device_type": 1 00:05:05.146 }, 00:05:05.146 { 00:05:05.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.146 "dma_device_type": 2 00:05:05.146 } 00:05:05.146 ], 00:05:05.146 "driver_specific": {} 00:05:05.146 } 00:05:05.146 ]' 00:05:05.146 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:05.404 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:05.404 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:05.404 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.404 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.404 [2024-07-24 10:27:12.612398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:05.404 [2024-07-24 10:27:12.612428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.404 [2024-07-24 10:27:12.612441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a7f1b0 00:05:05.404 [2024-07-24 10:27:12.612447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:05.404 [2024-07-24 10:27:12.613450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.404 [2024-07-24 
10:27:12.613472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:05.404 Passthru0 00:05:05.404 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.404 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:05.404 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.404 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.404 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.404 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:05.404 { 00:05:05.404 "name": "Malloc0", 00:05:05.404 "aliases": [ 00:05:05.404 "4a977a34-a0a7-4bac-8464-4d5766a60bb3" 00:05:05.404 ], 00:05:05.404 "product_name": "Malloc disk", 00:05:05.404 "block_size": 512, 00:05:05.404 "num_blocks": 16384, 00:05:05.404 "uuid": "4a977a34-a0a7-4bac-8464-4d5766a60bb3", 00:05:05.404 "assigned_rate_limits": { 00:05:05.404 "rw_ios_per_sec": 0, 00:05:05.404 "rw_mbytes_per_sec": 0, 00:05:05.404 "r_mbytes_per_sec": 0, 00:05:05.404 "w_mbytes_per_sec": 0 00:05:05.404 }, 00:05:05.404 "claimed": true, 00:05:05.404 "claim_type": "exclusive_write", 00:05:05.404 "zoned": false, 00:05:05.404 "supported_io_types": { 00:05:05.404 "read": true, 00:05:05.404 "write": true, 00:05:05.404 "unmap": true, 00:05:05.404 "flush": true, 00:05:05.404 "reset": true, 00:05:05.404 "nvme_admin": false, 00:05:05.404 "nvme_io": false, 00:05:05.404 "nvme_io_md": false, 00:05:05.404 "write_zeroes": true, 00:05:05.404 "zcopy": true, 00:05:05.404 "get_zone_info": false, 00:05:05.404 "zone_management": false, 00:05:05.404 "zone_append": false, 00:05:05.404 "compare": false, 00:05:05.404 "compare_and_write": false, 00:05:05.404 "abort": true, 00:05:05.404 "seek_hole": false, 00:05:05.404 "seek_data": false, 00:05:05.404 "copy": true, 00:05:05.404 "nvme_iov_md": false 00:05:05.404 }, 00:05:05.404 "memory_domains": [ 00:05:05.404 { 00:05:05.404 "dma_device_id": "system", 00:05:05.404 "dma_device_type": 1 00:05:05.404 }, 00:05:05.404 { 00:05:05.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.404 "dma_device_type": 2 00:05:05.404 } 00:05:05.404 ], 00:05:05.404 "driver_specific": {} 00:05:05.404 }, 00:05:05.404 { 00:05:05.404 "name": "Passthru0", 00:05:05.404 "aliases": [ 00:05:05.405 "00a3faaf-dd17-5755-ba77-afab4d39233f" 00:05:05.405 ], 00:05:05.405 "product_name": "passthru", 00:05:05.405 "block_size": 512, 00:05:05.405 "num_blocks": 16384, 00:05:05.405 "uuid": "00a3faaf-dd17-5755-ba77-afab4d39233f", 00:05:05.405 "assigned_rate_limits": { 00:05:05.405 "rw_ios_per_sec": 0, 00:05:05.405 "rw_mbytes_per_sec": 0, 00:05:05.405 "r_mbytes_per_sec": 0, 00:05:05.405 "w_mbytes_per_sec": 0 00:05:05.405 }, 00:05:05.405 "claimed": false, 00:05:05.405 "zoned": false, 00:05:05.405 "supported_io_types": { 00:05:05.405 "read": true, 00:05:05.405 "write": true, 00:05:05.405 "unmap": true, 00:05:05.405 "flush": true, 00:05:05.405 "reset": true, 00:05:05.405 "nvme_admin": false, 00:05:05.405 "nvme_io": false, 00:05:05.405 "nvme_io_md": false, 00:05:05.405 "write_zeroes": true, 00:05:05.405 "zcopy": true, 00:05:05.405 "get_zone_info": false, 00:05:05.405 "zone_management": false, 00:05:05.405 "zone_append": false, 00:05:05.405 "compare": false, 00:05:05.405 "compare_and_write": false, 00:05:05.405 "abort": true, 00:05:05.405 "seek_hole": false, 00:05:05.405 "seek_data": false, 00:05:05.405 "copy": true, 00:05:05.405 "nvme_iov_md": false 00:05:05.405 }, 00:05:05.405 
"memory_domains": [ 00:05:05.405 { 00:05:05.405 "dma_device_id": "system", 00:05:05.405 "dma_device_type": 1 00:05:05.405 }, 00:05:05.405 { 00:05:05.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.405 "dma_device_type": 2 00:05:05.405 } 00:05:05.405 ], 00:05:05.405 "driver_specific": { 00:05:05.405 "passthru": { 00:05:05.405 "name": "Passthru0", 00:05:05.405 "base_bdev_name": "Malloc0" 00:05:05.405 } 00:05:05.405 } 00:05:05.405 } 00:05:05.405 ]' 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:05.405 10:27:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:05.405 00:05:05.405 real 0m0.274s 00:05:05.405 user 0m0.172s 00:05:05.405 sys 0m0.033s 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.405 10:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.405 ************************************ 00:05:05.405 END TEST rpc_integrity 00:05:05.405 ************************************ 00:05:05.405 10:27:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:05.405 10:27:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.405 10:27:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.405 10:27:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.405 ************************************ 00:05:05.405 START TEST rpc_plugins 00:05:05.405 ************************************ 00:05:05.405 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:05.405 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:05.405 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.405 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.405 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.405 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:05.405 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:05.405 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.405 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 
00:05:05.405 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.405 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:05.405 { 00:05:05.405 "name": "Malloc1", 00:05:05.405 "aliases": [ 00:05:05.405 "0ce4f1d5-7568-42f1-94a1-138e6dfe3ed3" 00:05:05.405 ], 00:05:05.405 "product_name": "Malloc disk", 00:05:05.405 "block_size": 4096, 00:05:05.405 "num_blocks": 256, 00:05:05.405 "uuid": "0ce4f1d5-7568-42f1-94a1-138e6dfe3ed3", 00:05:05.405 "assigned_rate_limits": { 00:05:05.405 "rw_ios_per_sec": 0, 00:05:05.405 "rw_mbytes_per_sec": 0, 00:05:05.405 "r_mbytes_per_sec": 0, 00:05:05.405 "w_mbytes_per_sec": 0 00:05:05.405 }, 00:05:05.405 "claimed": false, 00:05:05.405 "zoned": false, 00:05:05.405 "supported_io_types": { 00:05:05.405 "read": true, 00:05:05.405 "write": true, 00:05:05.405 "unmap": true, 00:05:05.405 "flush": true, 00:05:05.405 "reset": true, 00:05:05.405 "nvme_admin": false, 00:05:05.405 "nvme_io": false, 00:05:05.405 "nvme_io_md": false, 00:05:05.405 "write_zeroes": true, 00:05:05.405 "zcopy": true, 00:05:05.405 "get_zone_info": false, 00:05:05.405 "zone_management": false, 00:05:05.405 "zone_append": false, 00:05:05.405 "compare": false, 00:05:05.405 "compare_and_write": false, 00:05:05.405 "abort": true, 00:05:05.405 "seek_hole": false, 00:05:05.405 "seek_data": false, 00:05:05.405 "copy": true, 00:05:05.405 "nvme_iov_md": false 00:05:05.405 }, 00:05:05.405 "memory_domains": [ 00:05:05.405 { 00:05:05.405 "dma_device_id": "system", 00:05:05.405 "dma_device_type": 1 00:05:05.405 }, 00:05:05.405 { 00:05:05.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.405 "dma_device_type": 2 00:05:05.405 } 00:05:05.405 ], 00:05:05.405 "driver_specific": {} 00:05:05.405 } 00:05:05.405 ]' 00:05:05.405 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:05.663 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:05.663 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.663 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.663 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:05.663 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:05.663 10:27:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:05.663 00:05:05.663 real 0m0.143s 00:05:05.663 user 0m0.084s 00:05:05.663 sys 0m0.025s 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.663 10:27:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.663 ************************************ 00:05:05.663 END TEST rpc_plugins 00:05:05.663 ************************************ 00:05:05.663 10:27:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:05.663 10:27:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.663 10:27:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.663 10:27:12 rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:05.663 ************************************ 00:05:05.663 START TEST rpc_trace_cmd_test 00:05:05.663 ************************************ 00:05:05.663 10:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:05.663 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:05.663 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:05.663 10:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.663 10:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.663 10:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.663 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:05.663 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2037371", 00:05:05.663 "tpoint_group_mask": "0x8", 00:05:05.663 "iscsi_conn": { 00:05:05.663 "mask": "0x2", 00:05:05.663 "tpoint_mask": "0x0" 00:05:05.663 }, 00:05:05.663 "scsi": { 00:05:05.663 "mask": "0x4", 00:05:05.663 "tpoint_mask": "0x0" 00:05:05.663 }, 00:05:05.663 "bdev": { 00:05:05.663 "mask": "0x8", 00:05:05.663 "tpoint_mask": "0xffffffffffffffff" 00:05:05.663 }, 00:05:05.663 "nvmf_rdma": { 00:05:05.663 "mask": "0x10", 00:05:05.663 "tpoint_mask": "0x0" 00:05:05.663 }, 00:05:05.663 "nvmf_tcp": { 00:05:05.663 "mask": "0x20", 00:05:05.663 "tpoint_mask": "0x0" 00:05:05.663 }, 00:05:05.663 "ftl": { 00:05:05.664 "mask": "0x40", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "blobfs": { 00:05:05.664 "mask": "0x80", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "dsa": { 00:05:05.664 "mask": "0x200", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "thread": { 00:05:05.664 "mask": "0x400", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "nvme_pcie": { 00:05:05.664 "mask": "0x800", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "iaa": { 00:05:05.664 "mask": "0x1000", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "nvme_tcp": { 00:05:05.664 "mask": "0x2000", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "bdev_nvme": { 00:05:05.664 "mask": "0x4000", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 }, 00:05:05.664 "sock": { 00:05:05.664 "mask": "0x8000", 00:05:05.664 "tpoint_mask": "0x0" 00:05:05.664 } 00:05:05.664 }' 00:05:05.664 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:05.664 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:05.664 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:05.922 00:05:05.922 real 0m0.217s 00:05:05.922 user 0m0.186s 00:05:05.922 sys 0m0.022s 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.922 10:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # 
set +x 00:05:05.922 ************************************ 00:05:05.922 END TEST rpc_trace_cmd_test 00:05:05.922 ************************************ 00:05:05.922 10:27:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:05.922 10:27:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:05.922 10:27:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:05.922 10:27:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.922 10:27:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.922 10:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.922 ************************************ 00:05:05.922 START TEST rpc_daemon_integrity 00:05:05.922 ************************************ 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.922 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.180 { 00:05:06.180 "name": "Malloc2", 00:05:06.180 "aliases": [ 00:05:06.180 "64ff839f-9ecb-4b7b-b8d7-e7062f38821c" 00:05:06.180 ], 00:05:06.180 "product_name": "Malloc disk", 00:05:06.180 "block_size": 512, 00:05:06.180 "num_blocks": 16384, 00:05:06.180 "uuid": "64ff839f-9ecb-4b7b-b8d7-e7062f38821c", 00:05:06.180 "assigned_rate_limits": { 00:05:06.180 "rw_ios_per_sec": 0, 00:05:06.180 "rw_mbytes_per_sec": 0, 00:05:06.180 "r_mbytes_per_sec": 0, 00:05:06.180 "w_mbytes_per_sec": 0 00:05:06.180 }, 00:05:06.180 "claimed": false, 00:05:06.180 "zoned": false, 00:05:06.180 "supported_io_types": { 00:05:06.180 "read": true, 00:05:06.180 "write": true, 00:05:06.180 "unmap": true, 00:05:06.180 "flush": true, 00:05:06.180 "reset": true, 00:05:06.180 "nvme_admin": false, 00:05:06.180 "nvme_io": false, 00:05:06.180 "nvme_io_md": false, 00:05:06.180 "write_zeroes": true, 00:05:06.180 "zcopy": true, 00:05:06.180 "get_zone_info": false, 00:05:06.180 "zone_management": false, 00:05:06.180 "zone_append": false, 00:05:06.180 "compare": false, 00:05:06.180 "compare_and_write": false, 00:05:06.180 "abort": true, 00:05:06.180 "seek_hole": false, 
00:05:06.180 "seek_data": false, 00:05:06.180 "copy": true, 00:05:06.180 "nvme_iov_md": false 00:05:06.180 }, 00:05:06.180 "memory_domains": [ 00:05:06.180 { 00:05:06.180 "dma_device_id": "system", 00:05:06.180 "dma_device_type": 1 00:05:06.180 }, 00:05:06.180 { 00:05:06.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.180 "dma_device_type": 2 00:05:06.180 } 00:05:06.180 ], 00:05:06.180 "driver_specific": {} 00:05:06.180 } 00:05:06.180 ]' 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.180 [2024-07-24 10:27:13.442666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:06.180 [2024-07-24 10:27:13.442694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:06.180 [2024-07-24 10:27:13.442710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18ceb20 00:05:06.180 [2024-07-24 10:27:13.442716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:06.180 [2024-07-24 10:27:13.443640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:06.180 [2024-07-24 10:27:13.443659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:06.180 Passthru0 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:06.180 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:06.181 { 00:05:06.181 "name": "Malloc2", 00:05:06.181 "aliases": [ 00:05:06.181 "64ff839f-9ecb-4b7b-b8d7-e7062f38821c" 00:05:06.181 ], 00:05:06.181 "product_name": "Malloc disk", 00:05:06.181 "block_size": 512, 00:05:06.181 "num_blocks": 16384, 00:05:06.181 "uuid": "64ff839f-9ecb-4b7b-b8d7-e7062f38821c", 00:05:06.181 "assigned_rate_limits": { 00:05:06.181 "rw_ios_per_sec": 0, 00:05:06.181 "rw_mbytes_per_sec": 0, 00:05:06.181 "r_mbytes_per_sec": 0, 00:05:06.181 "w_mbytes_per_sec": 0 00:05:06.181 }, 00:05:06.181 "claimed": true, 00:05:06.181 "claim_type": "exclusive_write", 00:05:06.181 "zoned": false, 00:05:06.181 "supported_io_types": { 00:05:06.181 "read": true, 00:05:06.181 "write": true, 00:05:06.181 "unmap": true, 00:05:06.181 "flush": true, 00:05:06.181 "reset": true, 00:05:06.181 "nvme_admin": false, 00:05:06.181 "nvme_io": false, 00:05:06.181 "nvme_io_md": false, 00:05:06.181 "write_zeroes": true, 00:05:06.181 "zcopy": true, 00:05:06.181 "get_zone_info": false, 00:05:06.181 "zone_management": false, 00:05:06.181 "zone_append": false, 00:05:06.181 "compare": false, 00:05:06.181 "compare_and_write": false, 00:05:06.181 "abort": true, 00:05:06.181 "seek_hole": false, 00:05:06.181 "seek_data": false, 00:05:06.181 "copy": true, 00:05:06.181 "nvme_iov_md": false 00:05:06.181 }, 00:05:06.181 
"memory_domains": [ 00:05:06.181 { 00:05:06.181 "dma_device_id": "system", 00:05:06.181 "dma_device_type": 1 00:05:06.181 }, 00:05:06.181 { 00:05:06.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.181 "dma_device_type": 2 00:05:06.181 } 00:05:06.181 ], 00:05:06.181 "driver_specific": {} 00:05:06.181 }, 00:05:06.181 { 00:05:06.181 "name": "Passthru0", 00:05:06.181 "aliases": [ 00:05:06.181 "a31df848-0226-5045-958b-fd1ffc01684d" 00:05:06.181 ], 00:05:06.181 "product_name": "passthru", 00:05:06.181 "block_size": 512, 00:05:06.181 "num_blocks": 16384, 00:05:06.181 "uuid": "a31df848-0226-5045-958b-fd1ffc01684d", 00:05:06.181 "assigned_rate_limits": { 00:05:06.181 "rw_ios_per_sec": 0, 00:05:06.181 "rw_mbytes_per_sec": 0, 00:05:06.181 "r_mbytes_per_sec": 0, 00:05:06.181 "w_mbytes_per_sec": 0 00:05:06.181 }, 00:05:06.181 "claimed": false, 00:05:06.181 "zoned": false, 00:05:06.181 "supported_io_types": { 00:05:06.181 "read": true, 00:05:06.181 "write": true, 00:05:06.181 "unmap": true, 00:05:06.181 "flush": true, 00:05:06.181 "reset": true, 00:05:06.181 "nvme_admin": false, 00:05:06.181 "nvme_io": false, 00:05:06.181 "nvme_io_md": false, 00:05:06.181 "write_zeroes": true, 00:05:06.181 "zcopy": true, 00:05:06.181 "get_zone_info": false, 00:05:06.181 "zone_management": false, 00:05:06.181 "zone_append": false, 00:05:06.181 "compare": false, 00:05:06.181 "compare_and_write": false, 00:05:06.181 "abort": true, 00:05:06.181 "seek_hole": false, 00:05:06.181 "seek_data": false, 00:05:06.181 "copy": true, 00:05:06.181 "nvme_iov_md": false 00:05:06.181 }, 00:05:06.181 "memory_domains": [ 00:05:06.181 { 00:05:06.181 "dma_device_id": "system", 00:05:06.181 "dma_device_type": 1 00:05:06.181 }, 00:05:06.181 { 00:05:06.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.181 "dma_device_type": 2 00:05:06.181 } 00:05:06.181 ], 00:05:06.181 "driver_specific": { 00:05:06.181 "passthru": { 00:05:06.181 "name": "Passthru0", 00:05:06.181 "base_bdev_name": "Malloc2" 00:05:06.181 } 00:05:06.181 } 00:05:06.181 } 00:05:06.181 ]' 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:06.181 
10:27:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:06.181 00:05:06.181 real 0m0.270s 00:05:06.181 user 0m0.165s 00:05:06.181 sys 0m0.045s 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.181 10:27:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.181 ************************************ 00:05:06.181 END TEST rpc_daemon_integrity 00:05:06.181 ************************************ 00:05:06.181 10:27:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:06.181 10:27:13 rpc -- rpc/rpc.sh@84 -- # killprocess 2037371 00:05:06.181 10:27:13 rpc -- common/autotest_common.sh@950 -- # '[' -z 2037371 ']' 00:05:06.181 10:27:13 rpc -- common/autotest_common.sh@954 -- # kill -0 2037371 00:05:06.181 10:27:13 rpc -- common/autotest_common.sh@955 -- # uname 00:05:06.181 10:27:13 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.181 10:27:13 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2037371 00:05:06.439 10:27:13 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.439 10:27:13 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.439 10:27:13 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2037371' 00:05:06.439 killing process with pid 2037371 00:05:06.439 10:27:13 rpc -- common/autotest_common.sh@969 -- # kill 2037371 00:05:06.439 10:27:13 rpc -- common/autotest_common.sh@974 -- # wait 2037371 00:05:06.698 00:05:06.698 real 0m1.919s 00:05:06.698 user 0m2.487s 00:05:06.698 sys 0m0.646s 00:05:06.698 10:27:13 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.698 10:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.698 ************************************ 00:05:06.698 END TEST rpc 00:05:06.698 ************************************ 00:05:06.698 10:27:13 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:06.698 10:27:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.698 10:27:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.698 10:27:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.698 ************************************ 00:05:06.698 START TEST skip_rpc 00:05:06.698 ************************************ 00:05:06.698 10:27:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:06.698 * Looking for test storage... 
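The rpc_daemon_integrity pass above exercises the bdev RPCs end to end through rpc_cmd. A condensed equivalent using the standalone scripts/rpc.py client is sketched here for reference; the relative paths are shortened from the workspace paths in the log, and the sketch is illustrative rather than part of the test script itself.

    # create an 8 MB malloc bdev with a 512-byte block size; rpc.py prints the
    # generated name (Malloc2 in the run above, 16384 blocks)
    ./scripts/rpc.py bdev_malloc_create 8 512
    # claim it with a passthru vbdev, as the test does
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    # both bdevs should now be reported
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order and confirm the list is empty again
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length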
00:05:06.698 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:06.698 10:27:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:06.698 10:27:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:06.698 10:27:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:06.698 10:27:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.698 10:27:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.698 10:27:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.698 ************************************ 00:05:06.698 START TEST skip_rpc 00:05:06.698 ************************************ 00:05:06.698 10:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:06.698 10:27:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2037806 00:05:06.698 10:27:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.698 10:27:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:06.698 10:27:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:06.957 [2024-07-24 10:27:14.185936] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:06.957 [2024-07-24 10:27:14.185972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037806 ] 00:05:06.957 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.957 [2024-07-24 10:27:14.238565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.957 [2024-07-24 10:27:14.278221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2037806 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2037806 ']' 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2037806 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2037806 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2037806' 00:05:12.223 killing process with pid 2037806 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2037806 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2037806 00:05:12.223 00:05:12.223 real 0m5.344s 00:05:12.223 user 0m5.113s 00:05:12.223 sys 0m0.253s 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.223 10:27:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.223 ************************************ 00:05:12.223 END TEST skip_rpc 00:05:12.223 ************************************ 00:05:12.223 10:27:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:12.223 10:27:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.223 10:27:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.223 10:27:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.223 ************************************ 00:05:12.223 START TEST skip_rpc_with_json 00:05:12.223 ************************************ 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2038747 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2038747 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2038747 ']' 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
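The skip_rpc case that just finished boils down to starting the target with --no-rpc-server and asserting that any RPC then fails. A minimal sketch of that check follows; the fixed sleep and the plain if-statement stand in for the harness' waitforlisten/NOT helpers, and the paths are shortened.

    # launch the target with the RPC server disabled
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    # with no RPC server listening, the client call has to fail
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered although --no-rpc-server was given" >&2
    fi
    kill $spdk_pid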
00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.223 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.223 [2024-07-24 10:27:19.583328] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:12.223 [2024-07-24 10:27:19.583364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038747 ] 00:05:12.223 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.223 [2024-07-24 10:27:19.635369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.223 [2024-07-24 10:27:19.675916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.481 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.481 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:12.481 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.482 [2024-07-24 10:27:19.857507] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:12.482 request: 00:05:12.482 { 00:05:12.482 "trtype": "tcp", 00:05:12.482 "method": "nvmf_get_transports", 00:05:12.482 "req_id": 1 00:05:12.482 } 00:05:12.482 Got JSON-RPC error response 00:05:12.482 response: 00:05:12.482 { 00:05:12.482 "code": -19, 00:05:12.482 "message": "No such device" 00:05:12.482 } 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.482 [2024-07-24 10:27:19.865603] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.482 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.741 10:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.741 10:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:12.741 { 00:05:12.741 "subsystems": [ 00:05:12.741 { 00:05:12.741 "subsystem": "keyring", 00:05:12.741 "config": [] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "iobuf", 00:05:12.741 "config": [ 00:05:12.741 { 00:05:12.741 "method": "iobuf_set_options", 00:05:12.741 "params": { 00:05:12.741 "small_pool_count": 8192, 00:05:12.741 "large_pool_count": 1024, 00:05:12.741 "small_bufsize": 8192, 00:05:12.741 "large_bufsize": 135168 00:05:12.741 } 00:05:12.741 } 00:05:12.741 ] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": 
"sock", 00:05:12.741 "config": [ 00:05:12.741 { 00:05:12.741 "method": "sock_set_default_impl", 00:05:12.741 "params": { 00:05:12.741 "impl_name": "posix" 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "sock_impl_set_options", 00:05:12.741 "params": { 00:05:12.741 "impl_name": "ssl", 00:05:12.741 "recv_buf_size": 4096, 00:05:12.741 "send_buf_size": 4096, 00:05:12.741 "enable_recv_pipe": true, 00:05:12.741 "enable_quickack": false, 00:05:12.741 "enable_placement_id": 0, 00:05:12.741 "enable_zerocopy_send_server": true, 00:05:12.741 "enable_zerocopy_send_client": false, 00:05:12.741 "zerocopy_threshold": 0, 00:05:12.741 "tls_version": 0, 00:05:12.741 "enable_ktls": false 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "sock_impl_set_options", 00:05:12.741 "params": { 00:05:12.741 "impl_name": "posix", 00:05:12.741 "recv_buf_size": 2097152, 00:05:12.741 "send_buf_size": 2097152, 00:05:12.741 "enable_recv_pipe": true, 00:05:12.741 "enable_quickack": false, 00:05:12.741 "enable_placement_id": 0, 00:05:12.741 "enable_zerocopy_send_server": true, 00:05:12.741 "enable_zerocopy_send_client": false, 00:05:12.741 "zerocopy_threshold": 0, 00:05:12.741 "tls_version": 0, 00:05:12.741 "enable_ktls": false 00:05:12.741 } 00:05:12.741 } 00:05:12.741 ] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "vmd", 00:05:12.741 "config": [] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "accel", 00:05:12.741 "config": [ 00:05:12.741 { 00:05:12.741 "method": "accel_set_options", 00:05:12.741 "params": { 00:05:12.741 "small_cache_size": 128, 00:05:12.741 "large_cache_size": 16, 00:05:12.741 "task_count": 2048, 00:05:12.741 "sequence_count": 2048, 00:05:12.741 "buf_count": 2048 00:05:12.741 } 00:05:12.741 } 00:05:12.741 ] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "bdev", 00:05:12.741 "config": [ 00:05:12.741 { 00:05:12.741 "method": "bdev_set_options", 00:05:12.741 "params": { 00:05:12.741 "bdev_io_pool_size": 65535, 00:05:12.741 "bdev_io_cache_size": 256, 00:05:12.741 "bdev_auto_examine": true, 00:05:12.741 "iobuf_small_cache_size": 128, 00:05:12.741 "iobuf_large_cache_size": 16 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "bdev_raid_set_options", 00:05:12.741 "params": { 00:05:12.741 "process_window_size_kb": 1024, 00:05:12.741 "process_max_bandwidth_mb_sec": 0 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "bdev_iscsi_set_options", 00:05:12.741 "params": { 00:05:12.741 "timeout_sec": 30 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "bdev_nvme_set_options", 00:05:12.741 "params": { 00:05:12.741 "action_on_timeout": "none", 00:05:12.741 "timeout_us": 0, 00:05:12.741 "timeout_admin_us": 0, 00:05:12.741 "keep_alive_timeout_ms": 10000, 00:05:12.741 "arbitration_burst": 0, 00:05:12.741 "low_priority_weight": 0, 00:05:12.741 "medium_priority_weight": 0, 00:05:12.741 "high_priority_weight": 0, 00:05:12.741 "nvme_adminq_poll_period_us": 10000, 00:05:12.741 "nvme_ioq_poll_period_us": 0, 00:05:12.741 "io_queue_requests": 0, 00:05:12.741 "delay_cmd_submit": true, 00:05:12.741 "transport_retry_count": 4, 00:05:12.741 "bdev_retry_count": 3, 00:05:12.741 "transport_ack_timeout": 0, 00:05:12.741 "ctrlr_loss_timeout_sec": 0, 00:05:12.741 "reconnect_delay_sec": 0, 00:05:12.741 "fast_io_fail_timeout_sec": 0, 00:05:12.741 "disable_auto_failback": false, 00:05:12.741 "generate_uuids": false, 00:05:12.741 "transport_tos": 0, 00:05:12.741 "nvme_error_stat": false, 00:05:12.741 "rdma_srq_size": 
0, 00:05:12.741 "io_path_stat": false, 00:05:12.741 "allow_accel_sequence": false, 00:05:12.741 "rdma_max_cq_size": 0, 00:05:12.741 "rdma_cm_event_timeout_ms": 0, 00:05:12.741 "dhchap_digests": [ 00:05:12.741 "sha256", 00:05:12.741 "sha384", 00:05:12.741 "sha512" 00:05:12.741 ], 00:05:12.741 "dhchap_dhgroups": [ 00:05:12.741 "null", 00:05:12.741 "ffdhe2048", 00:05:12.741 "ffdhe3072", 00:05:12.741 "ffdhe4096", 00:05:12.741 "ffdhe6144", 00:05:12.741 "ffdhe8192" 00:05:12.741 ] 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "bdev_nvme_set_hotplug", 00:05:12.741 "params": { 00:05:12.741 "period_us": 100000, 00:05:12.741 "enable": false 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "bdev_wait_for_examine" 00:05:12.741 } 00:05:12.741 ] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "scsi", 00:05:12.741 "config": null 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "scheduler", 00:05:12.741 "config": [ 00:05:12.741 { 00:05:12.741 "method": "framework_set_scheduler", 00:05:12.741 "params": { 00:05:12.741 "name": "static" 00:05:12.741 } 00:05:12.741 } 00:05:12.741 ] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "vhost_scsi", 00:05:12.741 "config": [] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "vhost_blk", 00:05:12.741 "config": [] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "ublk", 00:05:12.741 "config": [] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "nbd", 00:05:12.741 "config": [] 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "subsystem": "nvmf", 00:05:12.741 "config": [ 00:05:12.741 { 00:05:12.741 "method": "nvmf_set_config", 00:05:12.741 "params": { 00:05:12.741 "discovery_filter": "match_any", 00:05:12.741 "admin_cmd_passthru": { 00:05:12.741 "identify_ctrlr": false 00:05:12.741 } 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "nvmf_set_max_subsystems", 00:05:12.741 "params": { 00:05:12.741 "max_subsystems": 1024 00:05:12.741 } 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "method": "nvmf_set_crdt", 00:05:12.741 "params": { 00:05:12.741 "crdt1": 0, 00:05:12.741 "crdt2": 0, 00:05:12.741 "crdt3": 0 00:05:12.741 } 00:05:12.741 }, 00:05:12.742 { 00:05:12.742 "method": "nvmf_create_transport", 00:05:12.742 "params": { 00:05:12.742 "trtype": "TCP", 00:05:12.742 "max_queue_depth": 128, 00:05:12.742 "max_io_qpairs_per_ctrlr": 127, 00:05:12.742 "in_capsule_data_size": 4096, 00:05:12.742 "max_io_size": 131072, 00:05:12.742 "io_unit_size": 131072, 00:05:12.742 "max_aq_depth": 128, 00:05:12.742 "num_shared_buffers": 511, 00:05:12.742 "buf_cache_size": 4294967295, 00:05:12.742 "dif_insert_or_strip": false, 00:05:12.742 "zcopy": false, 00:05:12.742 "c2h_success": true, 00:05:12.742 "sock_priority": 0, 00:05:12.742 "abort_timeout_sec": 1, 00:05:12.742 "ack_timeout": 0, 00:05:12.742 "data_wr_pool_size": 0 00:05:12.742 } 00:05:12.742 } 00:05:12.742 ] 00:05:12.742 }, 00:05:12.742 { 00:05:12.742 "subsystem": "iscsi", 00:05:12.742 "config": [ 00:05:12.742 { 00:05:12.742 "method": "iscsi_set_options", 00:05:12.742 "params": { 00:05:12.742 "node_base": "iqn.2016-06.io.spdk", 00:05:12.742 "max_sessions": 128, 00:05:12.742 "max_connections_per_session": 2, 00:05:12.742 "max_queue_depth": 64, 00:05:12.742 "default_time2wait": 2, 00:05:12.742 "default_time2retain": 20, 00:05:12.742 "first_burst_length": 8192, 00:05:12.742 "immediate_data": true, 00:05:12.742 "allow_duplicated_isid": false, 00:05:12.742 "error_recovery_level": 0, 00:05:12.742 "nop_timeout": 60, 00:05:12.742 
"nop_in_interval": 30, 00:05:12.742 "disable_chap": false, 00:05:12.742 "require_chap": false, 00:05:12.742 "mutual_chap": false, 00:05:12.742 "chap_group": 0, 00:05:12.742 "max_large_datain_per_connection": 64, 00:05:12.742 "max_r2t_per_connection": 4, 00:05:12.742 "pdu_pool_size": 36864, 00:05:12.742 "immediate_data_pool_size": 16384, 00:05:12.742 "data_out_pool_size": 2048 00:05:12.742 } 00:05:12.742 } 00:05:12.742 ] 00:05:12.742 } 00:05:12.742 ] 00:05:12.742 } 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2038747 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2038747 ']' 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2038747 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2038747 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2038747' 00:05:12.742 killing process with pid 2038747 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2038747 00:05:12.742 10:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2038747 00:05:13.001 10:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2038935 00:05:13.001 10:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:13.001 10:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2038935 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2038935 ']' 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2038935 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2038935 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2038935' 00:05:18.267 killing process with pid 2038935 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2038935 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2038935 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP 
Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:18.267 00:05:18.267 real 0m6.152s 00:05:18.267 user 0m5.858s 00:05:18.267 sys 0m0.527s 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.267 10:27:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.267 ************************************ 00:05:18.267 END TEST skip_rpc_with_json 00:05:18.267 ************************************ 00:05:18.526 10:27:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:18.526 10:27:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.526 10:27:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.526 10:27:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.526 ************************************ 00:05:18.526 START TEST skip_rpc_with_delay 00:05:18.526 ************************************ 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:18.526 [2024-07-24 10:27:25.817023] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
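Two results land just above: skip_rpc_with_json completed its configuration round-trip, and skip_rpc_with_delay provoked the expected --wait-for-rpc error. Stripped of the harness (paths and log redirections shortened, PID handling and the initial target start omitted), the two checks amount to roughly:

    # skip_rpc_with_json: persist the live configuration, restart from it, and
    # verify the nvmf TCP transport is recreated on start-up
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt

    # skip_rpc_with_delay: --wait-for-rpc only makes sense with an RPC server,
    # so this combination must exit with the error printed above
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc \
        && echo "should not get here" >&2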
00:05:18.526 [2024-07-24 10:27:25.817091] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.526 00:05:18.526 real 0m0.064s 00:05:18.526 user 0m0.043s 00:05:18.526 sys 0m0.021s 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.526 10:27:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:18.526 ************************************ 00:05:18.526 END TEST skip_rpc_with_delay 00:05:18.526 ************************************ 00:05:18.526 10:27:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:18.526 10:27:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:18.526 10:27:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:18.526 10:27:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.526 10:27:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.526 10:27:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.526 ************************************ 00:05:18.526 START TEST exit_on_failed_rpc_init 00:05:18.526 ************************************ 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2039933 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2039933 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2039933 ']' 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.526 10:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.526 [2024-07-24 10:27:25.942978] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
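exit_on_failed_rpc_init, which starts here, is a socket-collision check: the first target owns the default /var/tmp/spdk.sock, and a second instance pointed at the same socket has to abort during RPC initialization with the "in use ... Specify another" error shown next. In outline (the fixed sleep stands in for the test's waitforlisten):

    # first target claims the default RPC socket
    ./build/bin/spdk_tgt -m 0x1 &
    sleep 5
    # a second target on another core mask but the same /var/tmp/spdk.sock
    # cannot start its RPC server and must exit non-zero
    ./build/bin/spdk_tgt -m 0x2 \
        || echo "second instance refused the busy socket, as expected"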
00:05:18.526 [2024-07-24 10:27:25.943016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039933 ] 00:05:18.526 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.786 [2024-07-24 10:27:25.997638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.786 [2024-07-24 10:27:26.038418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:18.786 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.045 [2024-07-24 10:27:26.268659] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:19.045 [2024-07-24 10:27:26.268703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039963 ] 00:05:19.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.045 [2024-07-24 10:27:26.321907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.045 [2024-07-24 10:27:26.361351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.045 [2024-07-24 10:27:26.361417] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:19.045 [2024-07-24 10:27:26.361425] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:19.045 [2024-07-24 10:27:26.361431] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2039933 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2039933 ']' 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2039933 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2039933 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2039933' 00:05:19.045 killing process with pid 2039933 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2039933 00:05:19.045 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2039933 00:05:19.612 00:05:19.612 real 0m0.867s 00:05:19.612 user 0m0.905s 00:05:19.612 sys 0m0.366s 00:05:19.612 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.612 10:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.612 ************************************ 00:05:19.612 END TEST exit_on_failed_rpc_init 00:05:19.612 ************************************ 00:05:19.612 10:27:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:19.612 00:05:19.612 real 0m12.773s 00:05:19.612 user 0m12.039s 00:05:19.612 sys 0m1.414s 00:05:19.612 10:27:26 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.612 10:27:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.612 ************************************ 00:05:19.612 END TEST skip_rpc 00:05:19.612 ************************************ 00:05:19.612 10:27:26 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.612 10:27:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.612 10:27:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.612 
10:27:26 -- common/autotest_common.sh@10 -- # set +x 00:05:19.612 ************************************ 00:05:19.612 START TEST rpc_client 00:05:19.612 ************************************ 00:05:19.612 10:27:26 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.612 * Looking for test storage... 00:05:19.612 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:19.612 10:27:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:19.612 OK 00:05:19.612 10:27:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.612 00:05:19.612 real 0m0.103s 00:05:19.612 user 0m0.055s 00:05:19.612 sys 0m0.057s 00:05:19.612 10:27:26 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.612 10:27:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:19.612 ************************************ 00:05:19.612 END TEST rpc_client 00:05:19.612 ************************************ 00:05:19.612 10:27:26 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.612 10:27:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.612 10:27:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.612 10:27:26 -- common/autotest_common.sh@10 -- # set +x 00:05:19.612 ************************************ 00:05:19.612 START TEST json_config 00:05:19.612 ************************************ 00:05:19.612 10:27:27 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.876 10:27:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:19.876 10:27:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.876 10:27:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.876 10:27:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.876 10:27:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.876 10:27:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.876 10:27:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.876 10:27:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.876 10:27:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@47 -- # : 0 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.876 10:27:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.876 10:27:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:19.876 10:27:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.876 10:27:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.876 10:27:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:19.877 INFO: JSON configuration test init 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.877 10:27:27 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.877 10:27:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:19.877 10:27:27 json_config -- json_config/common.sh@10 -- # shift 00:05:19.877 10:27:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.877 10:27:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.877 10:27:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.877 10:27:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.877 10:27:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.877 10:27:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2040185 00:05:19.877 10:27:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.877 10:27:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.877 Waiting for target to run... 
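The json_config suite runs its target on a dedicated RPC socket (/var/tmp/spdk_tgt.sock) with subsystem initialization deferred. The start-up and the first configuration load seen below reduce to roughly the following; the fixed sleep replaces the harness' waitforlisten and the paths are shortened.

    # start the target on its own RPC socket, deferring subsystem init
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    sleep 5
    # pipe a generated NVMe bdev/subsystem config straight into load_config
    ./scripts/gen_nvme.sh --json-with-subsystems | \
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    # notification types used by the later bdev_register/bdev_unregister diff
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types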
00:05:19.877 10:27:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2040185 /var/tmp/spdk_tgt.sock 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@831 -- # '[' -z 2040185 ']' 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.877 10:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.877 [2024-07-24 10:27:27.154728] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:19.877 [2024-07-24 10:27:27.154781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040185 ] 00:05:19.877 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.135 [2024-07-24 10:27:27.424956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.135 [2024-07-24 10:27:27.449634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.700 10:27:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.700 10:27:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:20.700 10:27:27 json_config -- json_config/common.sh@26 -- # echo '' 00:05:20.700 00:05:20.700 10:27:27 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:20.700 10:27:27 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:20.700 10:27:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.700 10:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.700 10:27:27 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:20.700 10:27:27 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:20.700 10:27:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.700 10:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.700 10:27:28 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:20.700 10:27:28 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:20.700 10:27:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:23.983 10:27:31 
json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:23.983 10:27:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@51 -- # sort 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@237 -- # [[ rdma == \r\d\m\a ]] 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@238 -- # TEST_TRANSPORT=rdma 00:05:23.983 10:27:31 json_config -- json_config/json_config.sh@238 -- # nvmftestinit 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:23.983 10:27:31 json_config 
-- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:23.983 10:27:31 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:23.983 10:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:05:29.303 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:29.303 10:27:36 json_config 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:05:29.303 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:05:29.303 Found net devices under 0000:da:00.0: mlx_0_0 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:05:29.303 Found net devices under 0000:da:00.1: mlx_0_1 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@58 -- # uname 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:29.303 
10:27:36 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:29.303 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:29.303 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:05:29.303 altname enp218s0f0np0 00:05:29.303 altname ens818f0np0 00:05:29.303 inet 192.168.100.8/24 scope global mlx_0_0 00:05:29.303 valid_lft forever preferred_lft forever 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:05:29.303 10:27:36 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:29.303 10:27:36 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:29.303 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:29.303 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:05:29.303 altname enp218s0f1np1 00:05:29.304 altname ens818f1np1 00:05:29.304 inet 192.168.100.9/24 scope global mlx_0_1 00:05:29.304 valid_lft forever preferred_lft forever 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@422 -- # return 0 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@113 -- # ip -o -4 
addr show mlx_0_1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:29.304 192.168.100.9' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:29.304 192.168.100.9' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:29.304 192.168.100.9' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:29.304 10:27:36 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:29.304 10:27:36 json_config -- json_config/json_config.sh@241 -- # [[ -z 192.168.100.8 ]] 00:05:29.304 10:27:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.304 10:27:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.562 MallocForNvmf0 00:05:29.562 10:27:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.562 10:27:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.562 MallocForNvmf1 00:05:29.562 10:27:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:29.562 10:27:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:29.820 [2024-07-24 10:27:37.155694] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:29.820 [2024-07-24 10:27:37.183390] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae5650/0x94fe80) succeed. 00:05:29.820 [2024-07-24 10:27:37.194639] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae77f0/0x9cff00) succeed. 
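With nvmftestinit complete (RDMA interfaces mlx_0_0/mlx_0_1 detected and 192.168.100.8/192.168.100.9 picked up), the test builds the storage side of the NVMf target over RPC. The lines below are a sketch consolidating the rpc.py calls already visible in the trace into one manual sequence; rpc and sock are the same script and socket used throughout:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    "$rpc" -s "$sock" bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MB malloc bdev, 512-byte blocks
    "$rpc" -s "$sock" bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024-byte blocks
    # RDMA transport; -c 0 gets raised to the 256-byte minimum, hence the warning in the trace
    "$rpc" -s "$sock" nvmf_create_transport -t rdma -u 8192 -c 0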
00:05:29.820 10:27:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.820 10:27:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.078 10:27:37 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.078 10:27:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.337 10:27:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.337 10:27:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.337 10:27:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:30.337 10:27:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:30.595 [2024-07-24 10:27:37.876553] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:30.595 10:27:37 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:30.595 10:27:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.595 10:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.595 10:27:37 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:30.595 10:27:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.595 10:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.595 10:27:37 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:30.595 10:27:37 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.595 10:27:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.852 MallocBdevForConfigChangeCheck 00:05:30.852 10:27:38 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:30.852 10:27:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.852 10:27:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.852 10:27:38 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:30.853 10:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.111 10:27:38 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:31.111 INFO: shutting down applications... 
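The subsystem plumbing that follows is likewise plain rpc.py calls, ending with a sentinel bdev (MallocBdevForConfigChangeCheck) whose later deletion is what the change-detection step looks for, and a save_config snapshot. A consolidated sketch of the calls seen above, reusing the rpc and sock variables from the previous sketch; redirecting save_config into spdk_tgt_config.json approximates what the harness does with its configured path:

    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    "$rpc" -s "$sock" bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    "$rpc" -s "$sock" save_config > /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json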
00:05:31.111 10:27:38 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:31.111 10:27:38 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:31.111 10:27:38 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:31.111 10:27:38 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:33.640 Calling clear_iscsi_subsystem 00:05:33.640 Calling clear_nvmf_subsystem 00:05:33.640 Calling clear_nbd_subsystem 00:05:33.640 Calling clear_ublk_subsystem 00:05:33.640 Calling clear_vhost_blk_subsystem 00:05:33.640 Calling clear_vhost_scsi_subsystem 00:05:33.640 Calling clear_bdev_subsystem 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@349 -- # break 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:33.640 10:27:40 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:33.640 10:27:40 json_config -- json_config/common.sh@31 -- # local app=target 00:05:33.640 10:27:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.640 10:27:40 json_config -- json_config/common.sh@35 -- # [[ -n 2040185 ]] 00:05:33.640 10:27:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2040185 00:05:33.640 10:27:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.640 10:27:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.640 10:27:40 json_config -- json_config/common.sh@41 -- # kill -0 2040185 00:05:33.640 10:27:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.207 10:27:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.207 10:27:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.207 10:27:41 json_config -- json_config/common.sh@41 -- # kill -0 2040185 00:05:34.207 10:27:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.207 10:27:41 json_config -- json_config/common.sh@43 -- # break 00:05:34.207 10:27:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.207 10:27:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.207 SPDK target shutdown done 00:05:34.207 10:27:41 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:34.207 INFO: relaunching applications... 
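Shutdown is cooperative: the harness sends SIGINT and then polls the PID for up to 30 half-second intervals before declaring the target down. Roughly, per the kill -SIGINT / kill -0 / sleep 0.5 sequence in the trace (a simplified sketch of the shutdown helper in json_config/common.sh):

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown complete
        sleep 0.5                             # give the reactor time to drain and exit
    done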
00:05:34.207 10:27:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.207 10:27:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:34.207 10:27:41 json_config -- json_config/common.sh@10 -- # shift 00:05:34.207 10:27:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.207 10:27:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.207 10:27:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.207 10:27:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.207 10:27:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.207 10:27:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2044821 00:05:34.207 10:27:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.207 Waiting for target to run... 00:05:34.207 10:27:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.207 10:27:41 json_config -- json_config/common.sh@25 -- # waitforlisten 2044821 /var/tmp/spdk_tgt.sock 00:05:34.207 10:27:41 json_config -- common/autotest_common.sh@831 -- # '[' -z 2044821 ']' 00:05:34.207 10:27:41 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.207 10:27:41 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.207 10:27:41 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.207 10:27:41 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.207 10:27:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.207 [2024-07-24 10:27:41.517813] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:34.207 [2024-07-24 10:27:41.517871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044821 ] 00:05:34.207 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.774 [2024-07-24 10:27:41.963171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.774 [2024-07-24 10:27:41.996698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.057 [2024-07-24 10:27:45.021457] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c6e280/0x1ad4580) succeed. 00:05:38.057 [2024-07-24 10:27:45.032493] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c6e400/0x1b54600) succeed. 
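This second launch differs from the first only in how configuration is supplied: instead of --wait-for-rpc followed by a stream of RPCs, the saved JSON is replayed at startup, which is why the IB devices and (next) the RDMA listener reappear without any rpc.py traffic. The command line as it appears in the trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json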
00:05:38.057 [2024-07-24 10:27:45.082218] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:38.315 10:27:45 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.315 10:27:45 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:38.315 10:27:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:38.315 00:05:38.315 10:27:45 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:38.315 10:27:45 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.315 INFO: Checking if target configuration is the same... 00:05:38.316 10:27:45 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.316 10:27:45 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:38.316 10:27:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.316 + '[' 2 -ne 2 ']' 00:05:38.316 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.316 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:38.316 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:38.316 +++ basename /dev/fd/62 00:05:38.316 ++ mktemp /tmp/62.XXX 00:05:38.316 + tmp_file_1=/tmp/62.dk5 00:05:38.316 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.316 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.316 + tmp_file_2=/tmp/spdk_tgt_config.json.VBT 00:05:38.316 + ret=0 00:05:38.316 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.574 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.574 + diff -u /tmp/62.dk5 /tmp/spdk_tgt_config.json.VBT 00:05:38.574 + echo 'INFO: JSON config files are the same' 00:05:38.574 INFO: JSON config files are the same 00:05:38.574 + rm /tmp/62.dk5 /tmp/spdk_tgt_config.json.VBT 00:05:38.574 + exit 0 00:05:38.574 10:27:46 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:38.574 10:27:46 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.574 INFO: changing configuration and checking if this can be detected... 
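The "same configuration" check dumps the live config, canonicalizes both JSON documents with config_filter.py -method sort, and diffs them; an empty diff means the replayed config reproduced the original state. A hedged sketch of that comparison, reusing the rpc and sock variables from the earlier sketches and assuming config_filter.py reads the JSON on stdin as json_diff.sh drives it here; the /tmp file names are illustrative, the real ones come from mktemp:

    filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
    "$rpc" -s "$sock" save_config | "$filter" -method sort > /tmp/live.json
    "$filter" -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'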
00:05:38.574 10:27:46 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.574 10:27:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.832 10:27:46 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:38.832 10:27:46 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.832 10:27:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.832 + '[' 2 -ne 2 ']' 00:05:38.832 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.832 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:38.832 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:38.832 +++ basename /dev/fd/62 00:05:38.832 ++ mktemp /tmp/62.XXX 00:05:38.832 + tmp_file_1=/tmp/62.Jul 00:05:38.832 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.832 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.832 + tmp_file_2=/tmp/spdk_tgt_config.json.PU8 00:05:38.832 + ret=0 00:05:38.832 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.090 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.090 + diff -u /tmp/62.Jul /tmp/spdk_tgt_config.json.PU8 00:05:39.090 + ret=1 00:05:39.090 + echo '=== Start of file: /tmp/62.Jul ===' 00:05:39.090 + cat /tmp/62.Jul 00:05:39.090 + echo '=== End of file: /tmp/62.Jul ===' 00:05:39.090 + echo '' 00:05:39.090 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PU8 ===' 00:05:39.090 + cat /tmp/spdk_tgt_config.json.PU8 00:05:39.348 + echo '=== End of file: /tmp/spdk_tgt_config.json.PU8 ===' 00:05:39.348 + echo '' 00:05:39.348 + rm /tmp/62.Jul /tmp/spdk_tgt_config.json.PU8 00:05:39.348 + exit 1 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:39.348 INFO: configuration change detected. 
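The inverse check then deletes the sentinel bdev over RPC and repeats the same sort-and-diff; here a non-empty diff (ret=1) is the pass condition, proving the comparison can actually detect drift. Sketch, reusing the variables from the previous snippets:

    "$rpc" -s "$sock" bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-run the sort + diff above; diff exiting non-zero is what the test expects at this step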
00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@321 -- # [[ -n 2044821 ]] 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.348 10:27:46 json_config -- json_config/json_config.sh@327 -- # killprocess 2044821 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@950 -- # '[' -z 2044821 ']' 00:05:39.348 10:27:46 json_config -- common/autotest_common.sh@954 -- # kill -0 2044821 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@955 -- # uname 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2044821 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2044821' 00:05:39.349 killing process with pid 2044821 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@969 -- # kill 2044821 00:05:39.349 10:27:46 json_config -- common/autotest_common.sh@974 -- # wait 2044821 00:05:41.876 10:27:48 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.876 10:27:48 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:41.876 10:27:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.876 10:27:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.876 10:27:48 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:41.876 10:27:48 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:41.876 INFO: Success 00:05:41.876 10:27:48 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:05:41.876 10:27:48 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:41.876 10:27:48 json_config -- nvmf/common.sh@117 -- # sync 00:05:41.876 10:27:48 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:41.876 10:27:48 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:41.876 10:27:48 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:05:41.876 10:27:48 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:41.876 10:27:48 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:05:41.876 00:05:41.876 real 0m21.812s 00:05:41.876 user 0m23.973s 00:05:41.876 sys 0m5.913s 00:05:41.876 10:27:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.876 10:27:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.876 ************************************ 00:05:41.876 END TEST json_config 00:05:41.876 ************************************ 00:05:41.876 10:27:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.876 10:27:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.876 10:27:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.876 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:05:41.876 ************************************ 00:05:41.876 START TEST json_config_extra_key 00:05:41.876 ************************************ 00:05:41.876 10:27:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.876 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
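json_config_extra_key starts by sourcing test/nvmf/common.sh, which among other things derives a host NQN and host ID with nvme-cli's gen-hostnqn, as the trace shows. One way to express that derivation, consistent with the values above; the exact extraction in nvmf/common.sh may differ:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID as the host ID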
00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:41.876 10:27:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.876 10:27:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.876 10:27:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.876 10:27:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.876 10:27:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.876 10:27:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.876 10:27:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.876 10:27:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.876 10:27:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.876 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:41.876 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.876 10:27:48 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.876 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.876 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.876 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.876 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.877 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.877 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.877 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.877 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.877 INFO: launching applications... 00:05:41.877 10:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2046142 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.877 Waiting for target to run... 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2046142 /var/tmp/spdk_tgt.sock 00:05:41.877 10:27:48 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2046142 ']' 00:05:41.877 10:27:48 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.877 10:27:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.877 10:27:48 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.877 10:27:48 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:41.877 10:27:48 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.877 10:27:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.877 [2024-07-24 10:27:49.045225] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:41.877 [2024-07-24 10:27:49.045276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046142 ] 00:05:41.877 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.877 [2024-07-24 10:27:49.309262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.135 [2024-07-24 10:27:49.334813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.393 10:27:49 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.393 10:27:49 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.393 00:05:42.393 10:27:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.393 INFO: shutting down applications... 00:05:42.393 10:27:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2046142 ]] 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2046142 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2046142 00:05:42.393 10:27:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.961 10:27:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.961 10:27:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.961 10:27:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2046142 00:05:42.961 10:27:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.961 10:27:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:42.961 10:27:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.961 10:27:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.961 SPDK target shutdown done 00:05:42.961 10:27:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:42.961 Success 00:05:42.961 00:05:42.961 real 0m1.446s 00:05:42.961 user 0m1.226s 00:05:42.961 sys 0m0.367s 00:05:42.961 10:27:50 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.961 10:27:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.961 ************************************ 00:05:42.961 END TEST json_config_extra_key 00:05:42.961 ************************************ 00:05:42.961 10:27:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.961 10:27:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.961 10:27:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.961 10:27:50 -- common/autotest_common.sh@10 -- # set +x 00:05:43.219 ************************************ 00:05:43.219 START TEST alias_rpc 00:05:43.219 ************************************ 00:05:43.219 10:27:50 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.219 * Looking for test storage... 00:05:43.219 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:43.219 10:27:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.219 10:27:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2046504 00:05:43.219 10:27:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.219 10:27:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2046504 00:05:43.219 10:27:50 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2046504 ']' 00:05:43.219 10:27:50 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.219 10:27:50 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.219 10:27:50 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.219 10:27:50 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.219 10:27:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.219 [2024-07-24 10:27:50.552556] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:05:43.219 [2024-07-24 10:27:50.552614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046504 ] 00:05:43.219 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.219 [2024-07-24 10:27:50.606743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.219 [2024-07-24 10:27:50.648157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.476 10:27:50 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.476 10:27:50 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:43.476 10:27:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:43.734 10:27:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2046504 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2046504 ']' 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2046504 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2046504 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2046504' 00:05:43.734 killing process with pid 2046504 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@969 -- # kill 2046504 00:05:43.734 10:27:51 alias_rpc -- common/autotest_common.sh@974 -- # wait 2046504 00:05:43.992 00:05:43.993 real 0m0.955s 00:05:43.993 user 0m0.970s 00:05:43.993 sys 0m0.363s 00:05:43.993 10:27:51 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.993 10:27:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.993 ************************************ 00:05:43.993 END TEST alias_rpc 00:05:43.993 ************************************ 00:05:43.993 10:27:51 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:43.993 10:27:51 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:43.993 10:27:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.993 10:27:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.993 10:27:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.993 ************************************ 00:05:43.993 START TEST spdkcli_tcp 00:05:43.993 ************************************ 00:05:43.993 10:27:51 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.251 * Looking for test storage... 
00:05:44.251 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2046661 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2046661 00:05:44.252 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2046661 ']' 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.252 10:27:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.252 [2024-07-24 10:27:51.583322] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:05:44.252 [2024-07-24 10:27:51.583366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046661 ] 00:05:44.252 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.252 [2024-07-24 10:27:51.637052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.252 [2024-07-24 10:27:51.678704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.252 [2024-07-24 10:27:51.678707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.510 10:27:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.510 10:27:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:44.510 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:44.510 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2046783 00:05:44.510 10:27:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:44.769 [ 00:05:44.769 "bdev_malloc_delete", 00:05:44.769 "bdev_malloc_create", 00:05:44.769 "bdev_null_resize", 00:05:44.769 "bdev_null_delete", 00:05:44.769 "bdev_null_create", 00:05:44.769 "bdev_nvme_cuse_unregister", 00:05:44.769 "bdev_nvme_cuse_register", 00:05:44.769 "bdev_opal_new_user", 00:05:44.769 "bdev_opal_set_lock_state", 00:05:44.769 "bdev_opal_delete", 00:05:44.769 "bdev_opal_get_info", 00:05:44.769 "bdev_opal_create", 00:05:44.769 "bdev_nvme_opal_revert", 00:05:44.769 "bdev_nvme_opal_init", 00:05:44.769 "bdev_nvme_send_cmd", 00:05:44.769 "bdev_nvme_get_path_iostat", 00:05:44.769 "bdev_nvme_get_mdns_discovery_info", 00:05:44.769 "bdev_nvme_stop_mdns_discovery", 00:05:44.769 "bdev_nvme_start_mdns_discovery", 00:05:44.769 "bdev_nvme_set_multipath_policy", 00:05:44.769 "bdev_nvme_set_preferred_path", 00:05:44.769 "bdev_nvme_get_io_paths", 00:05:44.769 "bdev_nvme_remove_error_injection", 00:05:44.769 "bdev_nvme_add_error_injection", 00:05:44.769 "bdev_nvme_get_discovery_info", 00:05:44.769 "bdev_nvme_stop_discovery", 00:05:44.769 "bdev_nvme_start_discovery", 00:05:44.769 "bdev_nvme_get_controller_health_info", 00:05:44.769 "bdev_nvme_disable_controller", 00:05:44.769 "bdev_nvme_enable_controller", 00:05:44.769 "bdev_nvme_reset_controller", 00:05:44.769 "bdev_nvme_get_transport_statistics", 00:05:44.769 "bdev_nvme_apply_firmware", 00:05:44.769 "bdev_nvme_detach_controller", 00:05:44.769 "bdev_nvme_get_controllers", 00:05:44.769 "bdev_nvme_attach_controller", 00:05:44.769 "bdev_nvme_set_hotplug", 00:05:44.769 "bdev_nvme_set_options", 00:05:44.769 "bdev_passthru_delete", 00:05:44.769 "bdev_passthru_create", 00:05:44.769 "bdev_lvol_set_parent_bdev", 00:05:44.769 "bdev_lvol_set_parent", 00:05:44.769 "bdev_lvol_check_shallow_copy", 00:05:44.769 "bdev_lvol_start_shallow_copy", 00:05:44.769 "bdev_lvol_grow_lvstore", 00:05:44.769 "bdev_lvol_get_lvols", 00:05:44.769 "bdev_lvol_get_lvstores", 00:05:44.769 "bdev_lvol_delete", 00:05:44.769 "bdev_lvol_set_read_only", 00:05:44.769 "bdev_lvol_resize", 00:05:44.769 "bdev_lvol_decouple_parent", 00:05:44.769 "bdev_lvol_inflate", 00:05:44.769 "bdev_lvol_rename", 00:05:44.769 "bdev_lvol_clone_bdev", 00:05:44.769 "bdev_lvol_clone", 00:05:44.769 "bdev_lvol_snapshot", 00:05:44.769 "bdev_lvol_create", 00:05:44.769 "bdev_lvol_delete_lvstore", 00:05:44.769 
"bdev_lvol_rename_lvstore", 00:05:44.769 "bdev_lvol_create_lvstore", 00:05:44.769 "bdev_raid_set_options", 00:05:44.769 "bdev_raid_remove_base_bdev", 00:05:44.769 "bdev_raid_add_base_bdev", 00:05:44.769 "bdev_raid_delete", 00:05:44.769 "bdev_raid_create", 00:05:44.769 "bdev_raid_get_bdevs", 00:05:44.769 "bdev_error_inject_error", 00:05:44.769 "bdev_error_delete", 00:05:44.769 "bdev_error_create", 00:05:44.769 "bdev_split_delete", 00:05:44.769 "bdev_split_create", 00:05:44.769 "bdev_delay_delete", 00:05:44.769 "bdev_delay_create", 00:05:44.769 "bdev_delay_update_latency", 00:05:44.769 "bdev_zone_block_delete", 00:05:44.769 "bdev_zone_block_create", 00:05:44.769 "blobfs_create", 00:05:44.769 "blobfs_detect", 00:05:44.769 "blobfs_set_cache_size", 00:05:44.769 "bdev_aio_delete", 00:05:44.769 "bdev_aio_rescan", 00:05:44.769 "bdev_aio_create", 00:05:44.769 "bdev_ftl_set_property", 00:05:44.769 "bdev_ftl_get_properties", 00:05:44.769 "bdev_ftl_get_stats", 00:05:44.769 "bdev_ftl_unmap", 00:05:44.769 "bdev_ftl_unload", 00:05:44.769 "bdev_ftl_delete", 00:05:44.769 "bdev_ftl_load", 00:05:44.769 "bdev_ftl_create", 00:05:44.769 "bdev_virtio_attach_controller", 00:05:44.769 "bdev_virtio_scsi_get_devices", 00:05:44.769 "bdev_virtio_detach_controller", 00:05:44.769 "bdev_virtio_blk_set_hotplug", 00:05:44.769 "bdev_iscsi_delete", 00:05:44.769 "bdev_iscsi_create", 00:05:44.769 "bdev_iscsi_set_options", 00:05:44.769 "accel_error_inject_error", 00:05:44.769 "ioat_scan_accel_module", 00:05:44.769 "dsa_scan_accel_module", 00:05:44.769 "iaa_scan_accel_module", 00:05:44.769 "keyring_file_remove_key", 00:05:44.769 "keyring_file_add_key", 00:05:44.769 "keyring_linux_set_options", 00:05:44.769 "iscsi_get_histogram", 00:05:44.769 "iscsi_enable_histogram", 00:05:44.769 "iscsi_set_options", 00:05:44.769 "iscsi_get_auth_groups", 00:05:44.769 "iscsi_auth_group_remove_secret", 00:05:44.769 "iscsi_auth_group_add_secret", 00:05:44.769 "iscsi_delete_auth_group", 00:05:44.769 "iscsi_create_auth_group", 00:05:44.769 "iscsi_set_discovery_auth", 00:05:44.769 "iscsi_get_options", 00:05:44.769 "iscsi_target_node_request_logout", 00:05:44.769 "iscsi_target_node_set_redirect", 00:05:44.769 "iscsi_target_node_set_auth", 00:05:44.769 "iscsi_target_node_add_lun", 00:05:44.769 "iscsi_get_stats", 00:05:44.769 "iscsi_get_connections", 00:05:44.769 "iscsi_portal_group_set_auth", 00:05:44.769 "iscsi_start_portal_group", 00:05:44.769 "iscsi_delete_portal_group", 00:05:44.769 "iscsi_create_portal_group", 00:05:44.769 "iscsi_get_portal_groups", 00:05:44.769 "iscsi_delete_target_node", 00:05:44.769 "iscsi_target_node_remove_pg_ig_maps", 00:05:44.769 "iscsi_target_node_add_pg_ig_maps", 00:05:44.769 "iscsi_create_target_node", 00:05:44.769 "iscsi_get_target_nodes", 00:05:44.769 "iscsi_delete_initiator_group", 00:05:44.769 "iscsi_initiator_group_remove_initiators", 00:05:44.769 "iscsi_initiator_group_add_initiators", 00:05:44.769 "iscsi_create_initiator_group", 00:05:44.769 "iscsi_get_initiator_groups", 00:05:44.769 "nvmf_set_crdt", 00:05:44.769 "nvmf_set_config", 00:05:44.769 "nvmf_set_max_subsystems", 00:05:44.769 "nvmf_stop_mdns_prr", 00:05:44.769 "nvmf_publish_mdns_prr", 00:05:44.769 "nvmf_subsystem_get_listeners", 00:05:44.769 "nvmf_subsystem_get_qpairs", 00:05:44.769 "nvmf_subsystem_get_controllers", 00:05:44.769 "nvmf_get_stats", 00:05:44.769 "nvmf_get_transports", 00:05:44.769 "nvmf_create_transport", 00:05:44.769 "nvmf_get_targets", 00:05:44.769 "nvmf_delete_target", 00:05:44.769 "nvmf_create_target", 00:05:44.769 
"nvmf_subsystem_allow_any_host", 00:05:44.769 "nvmf_subsystem_remove_host", 00:05:44.769 "nvmf_subsystem_add_host", 00:05:44.769 "nvmf_ns_remove_host", 00:05:44.769 "nvmf_ns_add_host", 00:05:44.769 "nvmf_subsystem_remove_ns", 00:05:44.769 "nvmf_subsystem_add_ns", 00:05:44.769 "nvmf_subsystem_listener_set_ana_state", 00:05:44.769 "nvmf_discovery_get_referrals", 00:05:44.769 "nvmf_discovery_remove_referral", 00:05:44.769 "nvmf_discovery_add_referral", 00:05:44.769 "nvmf_subsystem_remove_listener", 00:05:44.769 "nvmf_subsystem_add_listener", 00:05:44.769 "nvmf_delete_subsystem", 00:05:44.769 "nvmf_create_subsystem", 00:05:44.769 "nvmf_get_subsystems", 00:05:44.769 "env_dpdk_get_mem_stats", 00:05:44.769 "nbd_get_disks", 00:05:44.769 "nbd_stop_disk", 00:05:44.769 "nbd_start_disk", 00:05:44.769 "ublk_recover_disk", 00:05:44.769 "ublk_get_disks", 00:05:44.769 "ublk_stop_disk", 00:05:44.769 "ublk_start_disk", 00:05:44.769 "ublk_destroy_target", 00:05:44.769 "ublk_create_target", 00:05:44.769 "virtio_blk_create_transport", 00:05:44.769 "virtio_blk_get_transports", 00:05:44.769 "vhost_controller_set_coalescing", 00:05:44.769 "vhost_get_controllers", 00:05:44.770 "vhost_delete_controller", 00:05:44.770 "vhost_create_blk_controller", 00:05:44.770 "vhost_scsi_controller_remove_target", 00:05:44.770 "vhost_scsi_controller_add_target", 00:05:44.770 "vhost_start_scsi_controller", 00:05:44.770 "vhost_create_scsi_controller", 00:05:44.770 "thread_set_cpumask", 00:05:44.770 "framework_get_governor", 00:05:44.770 "framework_get_scheduler", 00:05:44.770 "framework_set_scheduler", 00:05:44.770 "framework_get_reactors", 00:05:44.770 "thread_get_io_channels", 00:05:44.770 "thread_get_pollers", 00:05:44.770 "thread_get_stats", 00:05:44.770 "framework_monitor_context_switch", 00:05:44.770 "spdk_kill_instance", 00:05:44.770 "log_enable_timestamps", 00:05:44.770 "log_get_flags", 00:05:44.770 "log_clear_flag", 00:05:44.770 "log_set_flag", 00:05:44.770 "log_get_level", 00:05:44.770 "log_set_level", 00:05:44.770 "log_get_print_level", 00:05:44.770 "log_set_print_level", 00:05:44.770 "framework_enable_cpumask_locks", 00:05:44.770 "framework_disable_cpumask_locks", 00:05:44.770 "framework_wait_init", 00:05:44.770 "framework_start_init", 00:05:44.770 "scsi_get_devices", 00:05:44.770 "bdev_get_histogram", 00:05:44.770 "bdev_enable_histogram", 00:05:44.770 "bdev_set_qos_limit", 00:05:44.770 "bdev_set_qd_sampling_period", 00:05:44.770 "bdev_get_bdevs", 00:05:44.770 "bdev_reset_iostat", 00:05:44.770 "bdev_get_iostat", 00:05:44.770 "bdev_examine", 00:05:44.770 "bdev_wait_for_examine", 00:05:44.770 "bdev_set_options", 00:05:44.770 "notify_get_notifications", 00:05:44.770 "notify_get_types", 00:05:44.770 "accel_get_stats", 00:05:44.770 "accel_set_options", 00:05:44.770 "accel_set_driver", 00:05:44.770 "accel_crypto_key_destroy", 00:05:44.770 "accel_crypto_keys_get", 00:05:44.770 "accel_crypto_key_create", 00:05:44.770 "accel_assign_opc", 00:05:44.770 "accel_get_module_info", 00:05:44.770 "accel_get_opc_assignments", 00:05:44.770 "vmd_rescan", 00:05:44.770 "vmd_remove_device", 00:05:44.770 "vmd_enable", 00:05:44.770 "sock_get_default_impl", 00:05:44.770 "sock_set_default_impl", 00:05:44.770 "sock_impl_set_options", 00:05:44.770 "sock_impl_get_options", 00:05:44.770 "iobuf_get_stats", 00:05:44.770 "iobuf_set_options", 00:05:44.770 "framework_get_pci_devices", 00:05:44.770 "framework_get_config", 00:05:44.770 "framework_get_subsystems", 00:05:44.770 "trace_get_info", 00:05:44.770 "trace_get_tpoint_group_mask", 00:05:44.770 
"trace_disable_tpoint_group", 00:05:44.770 "trace_enable_tpoint_group", 00:05:44.770 "trace_clear_tpoint_mask", 00:05:44.770 "trace_set_tpoint_mask", 00:05:44.770 "keyring_get_keys", 00:05:44.770 "spdk_get_version", 00:05:44.770 "rpc_get_methods" 00:05:44.770 ] 00:05:44.770 10:27:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.770 10:27:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.770 10:27:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2046661 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2046661 ']' 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2046661 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2046661 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2046661' 00:05:44.770 killing process with pid 2046661 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2046661 00:05:44.770 10:27:52 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2046661 00:05:45.029 00:05:45.029 real 0m0.974s 00:05:45.029 user 0m1.650s 00:05:45.029 sys 0m0.395s 00:05:45.029 10:27:52 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.029 10:27:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.029 ************************************ 00:05:45.029 END TEST spdkcli_tcp 00:05:45.029 ************************************ 00:05:45.029 10:27:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.029 10:27:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.029 10:27:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.029 10:27:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.287 ************************************ 00:05:45.287 START TEST dpdk_mem_utility 00:05:45.287 ************************************ 00:05:45.287 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.287 * Looking for test storage... 
00:05:45.287 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:45.287 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.287 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2046954 00:05:45.287 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2046954 00:05:45.288 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.288 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2046954 ']' 00:05:45.288 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.288 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.288 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.288 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.288 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.288 [2024-07-24 10:27:52.627389] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:45.288 [2024-07-24 10:27:52.627441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046954 ] 00:05:45.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.288 [2024-07-24 10:27:52.680956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.288 [2024-07-24 10:27:52.721831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.546 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.546 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:45.546 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:45.546 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:45.546 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.546 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.546 { 00:05:45.546 "filename": "/tmp/spdk_mem_dump.txt" 00:05:45.546 } 00:05:45.546 10:27:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.546 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.546 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:45.546 1 heaps totaling size 814.000000 MiB 00:05:45.546 size: 814.000000 MiB heap id: 0 00:05:45.546 end heaps---------- 00:05:45.546 8 mempools totaling size 598.116089 MiB 00:05:45.546 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:45.546 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:45.546 size: 84.521057 MiB name: bdev_io_2046954 00:05:45.546 size: 51.011292 MiB name: evtpool_2046954 00:05:45.546 size: 50.003479 MiB 
name: msgpool_2046954 00:05:45.546 size: 21.763794 MiB name: PDU_Pool 00:05:45.546 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:45.546 size: 0.026123 MiB name: Session_Pool 00:05:45.546 end mempools------- 00:05:45.546 6 memzones totaling size 4.142822 MiB 00:05:45.546 size: 1.000366 MiB name: RG_ring_0_2046954 00:05:45.546 size: 1.000366 MiB name: RG_ring_1_2046954 00:05:45.546 size: 1.000366 MiB name: RG_ring_4_2046954 00:05:45.546 size: 1.000366 MiB name: RG_ring_5_2046954 00:05:45.546 size: 0.125366 MiB name: RG_ring_2_2046954 00:05:45.546 size: 0.015991 MiB name: RG_ring_3_2046954 00:05:45.546 end memzones------- 00:05:45.546 10:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:45.805 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:45.805 list of free elements. size: 12.519348 MiB 00:05:45.805 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:45.805 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:45.805 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:45.805 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:45.805 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:45.805 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:45.805 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:45.805 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:45.805 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:45.805 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:45.805 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:45.805 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:45.805 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:45.805 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:45.805 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:45.805 list of standard malloc elements. 
size: 199.218079 MiB 00:05:45.805 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:45.805 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:45.805 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:45.805 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:45.805 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:45.805 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:45.805 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:45.805 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:45.805 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:45.805 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:45.805 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:45.805 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:45.805 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:45.805 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:45.805 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:45.805 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:45.805 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:45.805 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:45.805 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:45.805 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:45.805 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:45.805 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:45.805 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:45.805 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:45.805 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:45.806 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:45.806 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:45.806 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:45.806 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:45.806 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:45.806 list of memzone associated elements. 
size: 602.262573 MiB 00:05:45.806 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:45.806 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:45.806 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:45.806 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:45.806 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:45.806 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2046954_0 00:05:45.806 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:45.806 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2046954_0 00:05:45.806 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:45.806 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2046954_0 00:05:45.806 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:45.806 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:45.806 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:45.806 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:45.806 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:45.806 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2046954 00:05:45.806 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:45.806 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2046954 00:05:45.806 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:45.806 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2046954 00:05:45.806 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:45.806 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:45.806 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:45.806 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:45.806 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:45.806 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:45.806 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:45.806 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:45.806 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:45.806 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2046954 00:05:45.806 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:45.806 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2046954 00:05:45.806 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:45.806 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2046954 00:05:45.806 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:45.806 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2046954 00:05:45.806 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:45.806 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2046954 00:05:45.806 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:45.806 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:45.806 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:45.806 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:45.806 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:45.806 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:45.806 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:45.806 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2046954 00:05:45.806 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:45.806 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:45.806 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:45.806 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:45.806 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:45.806 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2046954 00:05:45.806 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:45.806 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:45.806 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:45.806 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2046954 00:05:45.806 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:45.806 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2046954 00:05:45.806 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:45.806 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:45.806 10:27:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:45.806 10:27:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2046954 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2046954 ']' 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2046954 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2046954 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2046954' 00:05:45.806 killing process with pid 2046954 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2046954 00:05:45.806 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2046954 00:05:46.066 00:05:46.066 real 0m0.852s 00:05:46.066 user 0m0.792s 00:05:46.066 sys 0m0.354s 00:05:46.066 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.066 10:27:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.066 ************************************ 00:05:46.066 END TEST dpdk_mem_utility 00:05:46.066 ************************************ 00:05:46.066 10:27:53 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:46.066 10:27:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.066 10:27:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.066 10:27:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.066 ************************************ 00:05:46.066 START TEST event 00:05:46.066 ************************************ 00:05:46.066 10:27:53 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:46.066 * Looking for test storage... 
00:05:46.066 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:46.066 10:27:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:46.066 10:27:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.066 10:27:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.066 10:27:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:46.066 10:27:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.066 10:27:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.325 ************************************ 00:05:46.325 START TEST event_perf 00:05:46.325 ************************************ 00:05:46.325 10:27:53 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.325 Running I/O for 1 seconds...[2024-07-24 10:27:53.545894] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:46.325 [2024-07-24 10:27:53.545960] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047234 ] 00:05:46.325 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.325 [2024-07-24 10:27:53.605187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.325 [2024-07-24 10:27:53.647787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.325 [2024-07-24 10:27:53.647885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.325 [2024-07-24 10:27:53.647978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.325 [2024-07-24 10:27:53.647979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.261 Running I/O for 1 seconds... 00:05:47.261 lcore 0: 214039 00:05:47.261 lcore 1: 214037 00:05:47.261 lcore 2: 214037 00:05:47.261 lcore 3: 214038 00:05:47.261 done. 00:05:47.261 00:05:47.261 real 0m1.186s 00:05:47.261 user 0m4.107s 00:05:47.261 sys 0m0.077s 00:05:47.261 10:27:54 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.261 10:27:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.261 ************************************ 00:05:47.261 END TEST event_perf 00:05:47.261 ************************************ 00:05:47.533 10:27:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.533 10:27:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:47.533 10:27:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.533 10:27:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.533 ************************************ 00:05:47.533 START TEST event_reactor 00:05:47.533 ************************************ 00:05:47.533 10:27:54 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.533 [2024-07-24 10:27:54.785814] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:05:47.533 [2024-07-24 10:27:54.785865] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047484 ] 00:05:47.533 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.533 [2024-07-24 10:27:54.840186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.533 [2024-07-24 10:27:54.879992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.907 test_start 00:05:48.907 oneshot 00:05:48.907 tick 100 00:05:48.907 tick 100 00:05:48.907 tick 250 00:05:48.907 tick 100 00:05:48.907 tick 100 00:05:48.907 tick 250 00:05:48.907 tick 100 00:05:48.907 tick 500 00:05:48.907 tick 100 00:05:48.907 tick 100 00:05:48.907 tick 250 00:05:48.907 tick 100 00:05:48.907 tick 100 00:05:48.907 test_end 00:05:48.907 00:05:48.907 real 0m1.167s 00:05:48.907 user 0m1.095s 00:05:48.907 sys 0m0.069s 00:05:48.907 10:27:55 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.907 10:27:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:48.907 ************************************ 00:05:48.907 END TEST event_reactor 00:05:48.907 ************************************ 00:05:48.907 10:27:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.908 10:27:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:48.908 10:27:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.908 10:27:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.908 ************************************ 00:05:48.908 START TEST event_reactor_perf 00:05:48.908 ************************************ 00:05:48.908 10:27:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.908 [2024-07-24 10:27:56.017744] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:05:48.908 [2024-07-24 10:27:56.017797] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047638 ] 00:05:48.908 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.908 [2024-07-24 10:27:56.071998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.908 [2024-07-24 10:27:56.111622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.844 test_start 00:05:49.844 test_end 00:05:49.844 Performance: 517536 events per second 00:05:49.844 00:05:49.844 real 0m1.166s 00:05:49.844 user 0m1.092s 00:05:49.844 sys 0m0.071s 00:05:49.844 10:27:57 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.844 10:27:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.844 ************************************ 00:05:49.844 END TEST event_reactor_perf 00:05:49.844 ************************************ 00:05:49.844 10:27:57 event -- event/event.sh@49 -- # uname -s 00:05:49.844 10:27:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:49.844 10:27:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:49.844 10:27:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.844 10:27:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.844 10:27:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.844 ************************************ 00:05:49.844 START TEST event_scheduler 00:05:49.844 ************************************ 00:05:49.844 10:27:57 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.102 * Looking for test storage... 00:05:50.102 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:50.102 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.102 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2047876 00:05:50.103 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.103 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.103 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2047876 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2047876 ']' 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.103 [2024-07-24 10:27:57.364342] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:05:50.103 [2024-07-24 10:27:57.364393] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047876 ] 00:05:50.103 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.103 [2024-07-24 10:27:57.417280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.103 [2024-07-24 10:27:57.461027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.103 [2024-07-24 10:27:57.461113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.103 [2024-07-24 10:27:57.461213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.103 [2024-07-24 10:27:57.461215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:50.103 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.103 [2024-07-24 10:27:57.509691] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:50.103 [2024-07-24 10:27:57.509707] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:50.103 [2024-07-24 10:27:57.509715] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:50.103 [2024-07-24 10:27:57.509720] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:50.103 [2024-07-24 10:27:57.509725] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.103 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.103 10:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 [2024-07-24 10:27:57.577297] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
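(Reference note) The dynamic-scheduler bring-up traced above can be reproduced by hand against a spdk_tgt or the scheduler test app started with --wait-for-rpc. A minimal sketch, assuming the SPDK repo root as the working directory and the default /var/tmp/spdk.sock RPC socket; all three RPC names appear in the rpc_get_methods listing earlier in this log:
# Switch from the static scheduler to the dynamic one before subsystem init completes.
./scripts/rpc.py framework_set_scheduler dynamic
# Finish initialization so the reactors begin balancing threads across cores.
./scripts/rpc.py framework_start_init
# Confirm which scheduler (and governor, if any) is now active.
./scripts/rpc.py framework_get_scheduler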
00:05:50.362 10:27:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:50.362 10:27:57 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.362 10:27:57 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 ************************************ 00:05:50.362 START TEST scheduler_create_thread 00:05:50.362 ************************************ 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 2 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 3 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 4 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 5 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 6 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 7 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 8 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 9 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 10 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.362 10:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.928 10:27:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.928 10:27:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:50.928 10:27:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.928 10:27:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.367 10:27:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.367 10:27:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.367 10:27:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.367 10:27:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.367 10:27:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.301 10:28:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.301 00:05:53.301 real 0m3.100s 00:05:53.301 user 0m0.023s 00:05:53.301 sys 0m0.006s 00:05:53.301 10:28:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.301 10:28:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.301 ************************************ 00:05:53.301 END TEST scheduler_create_thread 00:05:53.301 ************************************ 00:05:53.301 10:28:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.301 10:28:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2047876 00:05:53.301 10:28:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2047876 ']' 00:05:53.301 10:28:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2047876 00:05:53.301 10:28:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:53.301 10:28:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.301 10:28:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2047876 00:05:53.559 10:28:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:53.559 10:28:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:53.559 10:28:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2047876' 00:05:53.559 killing process with pid 2047876 00:05:53.559 10:28:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2047876 00:05:53.559 10:28:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2047876 00:05:53.818 [2024-07-24 10:28:01.092590] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
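(Reference note) The scheduler_create_thread test above drives thread lifecycle through the scheduler_plugin RPC plugin; the same calls can be issued manually. A sketch, assuming the plugin module shipped with test/event/scheduler is importable by rpc.py (e.g. via PYTHONPATH) and that the thread ids below are whatever the create calls returned:
# Create a thread pinned to core 0 (cpumask 0x1) that reports itself 100% busy.
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# Lower an existing thread (id 11 here, illustrative) to 50% active load.
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
# Delete a thread by id once it is no longer needed.
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12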
00:05:54.076 00:05:54.076 real 0m4.064s 00:05:54.076 user 0m6.540s 00:05:54.076 sys 0m0.319s 00:05:54.076 10:28:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.076 10:28:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.076 ************************************ 00:05:54.076 END TEST event_scheduler 00:05:54.076 ************************************ 00:05:54.076 10:28:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.076 10:28:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.076 10:28:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.077 10:28:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.077 10:28:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.077 ************************************ 00:05:54.077 START TEST app_repeat 00:05:54.077 ************************************ 00:05:54.077 10:28:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2048575 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2048575' 00:05:54.077 Process app_repeat pid: 2048575 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.077 spdk_app_start Round 0 00:05:54.077 10:28:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2048575 /var/tmp/spdk-nbd.sock 00:05:54.077 10:28:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2048575 ']' 00:05:54.077 10:28:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.077 10:28:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.077 10:28:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.077 10:28:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.077 10:28:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.077 [2024-07-24 10:28:01.396601] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:05:54.077 [2024-07-24 10:28:01.396655] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048575 ] 00:05:54.077 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.077 [2024-07-24 10:28:01.451265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.077 [2024-07-24 10:28:01.493604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.077 [2024-07-24 10:28:01.493607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.335 10:28:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.335 10:28:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:54.335 10:28:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.335 Malloc0 00:05:54.335 10:28:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.594 Malloc1 00:05:54.594 10:28:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.594 10:28:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.852 /dev/nbd0 00:05:54.852 10:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.852 10:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.852 10:28:02 event.app_repeat -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.852 1+0 records in 00:05:54.852 1+0 records out 00:05:54.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024438 s, 16.8 MB/s 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.852 10:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.852 10:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.852 10:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.852 10:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.110 /dev/nbd1 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.110 1+0 records in 00:05:55.110 1+0 records out 00:05:55.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224708 s, 18.2 MB/s 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.110 10:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.110 { 00:05:55.110 "nbd_device": "/dev/nbd0", 00:05:55.110 "bdev_name": "Malloc0" 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "nbd_device": "/dev/nbd1", 00:05:55.110 "bdev_name": "Malloc1" 00:05:55.110 } 00:05:55.110 ]' 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.110 { 00:05:55.110 "nbd_device": "/dev/nbd0", 00:05:55.110 "bdev_name": "Malloc0" 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "nbd_device": "/dev/nbd1", 00:05:55.110 "bdev_name": "Malloc1" 00:05:55.110 } 00:05:55.110 ]' 00:05:55.110 10:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.369 /dev/nbd1' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.369 /dev/nbd1' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.369 256+0 records in 00:05:55.369 256+0 records out 00:05:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743564 s, 141 MB/s 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.369 256+0 records in 00:05:55.369 256+0 records out 00:05:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134794 s, 77.8 MB/s 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.369 256+0 records in 00:05:55.369 256+0 records out 00:05:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144124 s, 72.8 MB/s 00:05:55.369 10:28:02 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.369 10:28:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.627 10:28:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.627 10:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.627 10:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.627 10:28:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.627 10:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.628 10:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.628 
10:28:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.628 10:28:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.628 10:28:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.628 10:28:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.628 10:28:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.628 10:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.886 10:28:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.886 10:28:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.145 10:28:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.404 [2024-07-24 10:28:03.638785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.404 [2024-07-24 10:28:03.675389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.404 [2024-07-24 10:28:03.675393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.404 [2024-07-24 10:28:03.716018] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.404 [2024-07-24 10:28:03.716057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.690 10:28:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.690 10:28:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.690 spdk_app_start Round 1 00:05:59.690 10:28:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2048575 /var/tmp/spdk-nbd.sock 00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2048575 ']' 00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
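Each app_repeat round starts from the same wiring the round-0 trace recorded: two 64 MB malloc bdevs with a 4096-byte block size are created over the app's /var/tmp/spdk-nbd.sock RPC socket and exported to the kernel as /dev/nbd0 and /dev/nbd1. A minimal sketch of those four calls, taken from the trace (Malloc0/Malloc1 are the names bdev_malloc_create returned in this run; scripts/rpc.py is relative to the SPDK repository root):

# create the backing bdevs: 64 MB each, 4096-byte blocks
scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
# export them as kernel nbd block devices
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1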
00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.690 10:28:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:59.690 10:28:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.690 Malloc0 00:05:59.690 10:28:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.690 Malloc1 00:05:59.690 10:28:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.690 10:28:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.690 10:28:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.949 /dev/nbd0 00:05:59.949 10:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.949 10:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:59.949 1+0 records in 00:05:59.949 1+0 records out 00:05:59.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018271 s, 22.4 MB/s 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:59.949 10:28:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:59.949 10:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.949 10:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.949 10:28:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.207 /dev/nbd1 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.208 1+0 records in 00:06:00.208 1+0 records out 00:06:00.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180435 s, 22.7 MB/s 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.208 10:28:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.208 { 00:06:00.208 
"nbd_device": "/dev/nbd0", 00:06:00.208 "bdev_name": "Malloc0" 00:06:00.208 }, 00:06:00.208 { 00:06:00.208 "nbd_device": "/dev/nbd1", 00:06:00.208 "bdev_name": "Malloc1" 00:06:00.208 } 00:06:00.208 ]' 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.208 { 00:06:00.208 "nbd_device": "/dev/nbd0", 00:06:00.208 "bdev_name": "Malloc0" 00:06:00.208 }, 00:06:00.208 { 00:06:00.208 "nbd_device": "/dev/nbd1", 00:06:00.208 "bdev_name": "Malloc1" 00:06:00.208 } 00:06:00.208 ]' 00:06:00.208 10:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.465 /dev/nbd1' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.465 /dev/nbd1' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.465 256+0 records in 00:06:00.465 256+0 records out 00:06:00.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103472 s, 101 MB/s 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.465 256+0 records in 00:06:00.465 256+0 records out 00:06:00.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135538 s, 77.4 MB/s 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.465 256+0 records in 00:06:00.465 256+0 records out 00:06:00.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151135 s, 69.4 MB/s 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.465 10:28:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.723 10:28:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.723 10:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.980 10:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.980 10:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.981 10:28:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.981 10:28:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.238 10:28:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.496 [2024-07-24 10:28:08.754414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.496 [2024-07-24 10:28:08.793016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.496 [2024-07-24 10:28:08.793018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.496 [2024-07-24 10:28:08.834532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.496 [2024-07-24 10:28:08.834574] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.775 10:28:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.775 10:28:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.775 spdk_app_start Round 2 00:06:04.775 10:28:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2048575 /var/tmp/spdk-nbd.sock 00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2048575 ']' 00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
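The data check that closes each round, as recorded in the trace, writes one megabyte of random data through both nbd devices and reads it back for comparison before the devices are stopped. A minimal sketch of that pass (the temporary file path is shortened here; the trace uses spdk/test/event/nbdrandtest):

# write pass: 256 x 4096-byte blocks of random data, pushed through each nbd device with O_DIRECT
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
# verify pass: the first 1M read back from each device must match the source file byte for byte
cmp -b -n 1M nbdrandtest /dev/nbd0
cmp -b -n 1M nbdrandtest /dev/nbd1
rm nbdrandtest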
00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.775 10:28:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.775 10:28:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.775 Malloc0 00:06:04.775 10:28:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.775 Malloc1 00:06:04.775 10:28:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.775 10:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.776 10:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.776 10:28:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.034 /dev/nbd0 00:06:05.034 10:28:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.034 10:28:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:05.034 1+0 records in 00:06:05.034 1+0 records out 00:06:05.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215506 s, 19.0 MB/s 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.034 10:28:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.034 10:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.034 10:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.034 10:28:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.034 /dev/nbd1 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.292 1+0 records in 00:06:05.292 1+0 records out 00:06:05.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000128755 s, 31.8 MB/s 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.292 10:28:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.292 { 00:06:05.292 
"nbd_device": "/dev/nbd0", 00:06:05.292 "bdev_name": "Malloc0" 00:06:05.292 }, 00:06:05.292 { 00:06:05.292 "nbd_device": "/dev/nbd1", 00:06:05.292 "bdev_name": "Malloc1" 00:06:05.292 } 00:06:05.292 ]' 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.292 { 00:06:05.292 "nbd_device": "/dev/nbd0", 00:06:05.292 "bdev_name": "Malloc0" 00:06:05.292 }, 00:06:05.292 { 00:06:05.292 "nbd_device": "/dev/nbd1", 00:06:05.292 "bdev_name": "Malloc1" 00:06:05.292 } 00:06:05.292 ]' 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.292 /dev/nbd1' 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.292 /dev/nbd1' 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.292 10:28:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.551 256+0 records in 00:06:05.551 256+0 records out 00:06:05.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103805 s, 101 MB/s 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.551 256+0 records in 00:06:05.551 256+0 records out 00:06:05.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132154 s, 79.3 MB/s 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.551 256+0 records in 00:06:05.551 256+0 records out 00:06:05.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014245 s, 73.6 MB/s 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.551 10:28:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.551 10:28:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.551 10:28:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.551 10:28:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.551 10:28:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.809 10:28:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.067 10:28:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.067 10:28:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.325 10:28:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.325 [2024-07-24 10:28:13.772766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.583 [2024-07-24 10:28:13.810201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.583 [2024-07-24 10:28:13.810203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.583 [2024-07-24 10:28:13.851106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.583 [2024-07-24 10:28:13.851145] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.865 10:28:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2048575 /var/tmp/spdk-nbd.sock 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2048575 ']' 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
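After each teardown the trace queries nbd_get_disks and counts how many /dev/nbd entries are still exported; once both devices have been stopped the count must come back as 0. A minimal sketch of that check, using the same jq filter the helper runs (the trailing || true is added here only because grep -c exits non-zero when it prints a count of 0):

# list exported nbd devices and count them; an empty list yields 0
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true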
00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:09.865 10:28:16 event.app_repeat -- event/event.sh@39 -- # killprocess 2048575 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2048575 ']' 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2048575 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2048575 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2048575' 00:06:09.865 killing process with pid 2048575 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2048575 00:06:09.865 10:28:16 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2048575 00:06:09.865 spdk_app_start is called in Round 0. 00:06:09.865 Shutdown signal received, stop current app iteration 00:06:09.865 Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 reinitialization... 00:06:09.865 spdk_app_start is called in Round 1. 00:06:09.865 Shutdown signal received, stop current app iteration 00:06:09.865 Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 reinitialization... 00:06:09.866 spdk_app_start is called in Round 2. 00:06:09.866 Shutdown signal received, stop current app iteration 00:06:09.866 Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 reinitialization... 00:06:09.866 spdk_app_start is called in Round 3. 
00:06:09.866 Shutdown signal received, stop current app iteration 00:06:09.866 10:28:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.866 10:28:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.866 00:06:09.866 real 0m15.616s 00:06:09.866 user 0m33.978s 00:06:09.866 sys 0m2.328s 00:06:09.866 10:28:16 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.866 10:28:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.866 ************************************ 00:06:09.866 END TEST app_repeat 00:06:09.866 ************************************ 00:06:09.866 10:28:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.866 10:28:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.866 10:28:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.866 10:28:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.866 10:28:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.866 ************************************ 00:06:09.866 START TEST cpu_locks 00:06:09.866 ************************************ 00:06:09.866 10:28:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.866 * Looking for test storage... 00:06:09.866 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:09.866 10:28:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.866 10:28:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.866 10:28:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.866 10:28:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.866 10:28:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.866 10:28:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.866 10:28:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.866 ************************************ 00:06:09.866 START TEST default_locks 00:06:09.866 ************************************ 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2051519 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2051519 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2051519 ']' 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
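The default_locks case that follows starts a target on core 0 and then asserts that the running process holds its per-core CPU lock: locks_exist runs lslocks against the target pid and greps for the spdk_cpu_lock file name, exactly as the trace below shows. The stray "lslocks: write error" line in the log is most likely lslocks reporting the broken pipe after grep -q exits at the first match, not a test failure. A minimal sketch of the check, with the pid this run assigned:

# confirm the running spdk_tgt holds an spdk_cpu_lock file lock
lslocks -p 2051519 | grep -q spdk_cpu_lock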
00:06:09.866 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.866 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.866 [2024-07-24 10:28:17.213437] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:09.866 [2024-07-24 10:28:17.213482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051519 ] 00:06:09.866 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.866 [2024-07-24 10:28:17.268099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.866 [2024-07-24 10:28:17.308027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.125 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.125 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:10.125 10:28:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2051519 00:06:10.125 10:28:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2051519 00:06:10.125 10:28:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.690 lslocks: write error 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2051519 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2051519 ']' 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2051519 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2051519 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2051519' 00:06:10.690 killing process with pid 2051519 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2051519 00:06:10.690 10:28:17 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2051519 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2051519 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2051519 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 2051519 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2051519 ']' 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.948 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2051519) - No such process 00:06:10.948 ERROR: process (pid: 2051519) is no longer running 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.948 00:06:10.948 real 0m1.094s 00:06:10.948 user 0m1.031s 00:06:10.948 sys 0m0.501s 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.948 10:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.948 ************************************ 00:06:10.948 END TEST default_locks 00:06:10.948 ************************************ 00:06:10.948 10:28:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.948 10:28:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.948 10:28:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.948 10:28:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.948 ************************************ 00:06:10.948 START TEST default_locks_via_rpc 00:06:10.948 ************************************ 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2051776 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2051776 
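The default_locks steps above reduce to one check, visible in the expanded xtrace (event/cpu_locks.sh@22): list the locks held by the target's pid with lslocks and look for the spdk_cpu_lock marker. The "lslocks: write error" line appears to be harmless stderr noise from lslocks, judging by the test continuing; that reading is an inference, not something the log states. A small sketch of the helper as it runs here:

  # locks_exist <pid>: succeed if the process holds an SPDK per-core lock file
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 2051519 && echo "spdk_tgt (pid 2051519) holds its core lock"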
00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2051776 ']' 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.948 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.948 [2024-07-24 10:28:18.368030] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:10.948 [2024-07-24 10:28:18.368071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051776 ] 00:06:10.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.206 [2024-07-24 10:28:18.414537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.206 [2024-07-24 10:28:18.453841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2051776 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2051776 00:06:11.206 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
2051776 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2051776 ']' 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2051776 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2051776 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2051776' 00:06:11.464 killing process with pid 2051776 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2051776 00:06:11.464 10:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2051776 00:06:11.722 00:06:11.722 real 0m0.773s 00:06:11.722 user 0m0.728s 00:06:11.722 sys 0m0.347s 00:06:11.722 10:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.722 10:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.722 ************************************ 00:06:11.722 END TEST default_locks_via_rpc 00:06:11.722 ************************************ 00:06:11.722 10:28:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.722 10:28:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.723 10:28:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.723 10:28:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.723 ************************************ 00:06:11.723 START TEST non_locking_app_on_locked_coremask 00:06:11.723 ************************************ 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2051823 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2051823 /var/tmp/spdk.sock 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2051823 ']' 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:11.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.723 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.981 [2024-07-24 10:28:19.216106] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:11.981 [2024-07-24 10:28:19.216149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051823 ] 00:06:11.981 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.981 [2024-07-24 10:28:19.269301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.981 [2024-07-24 10:28:19.309251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.238 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.238 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.238 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2052030 00:06:12.238 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2052030 /var/tmp/spdk2.sock 00:06:12.238 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.238 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2052030 ']' 00:06:12.239 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.239 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.239 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.239 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.239 10:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.239 [2024-07-24 10:28:19.543229] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:12.239 [2024-07-24 10:28:19.543278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052030 ] 00:06:12.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.239 [2024-07-24 10:28:19.618611] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
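What non_locking_app_on_locked_coremask exercises here: the first spdk_tgt on -m 0x1 has already taken the core 0 lock, and the second instance is started on the same mask but with --disable-cpumask-locks and its own RPC socket, so it skips lock acquisition ("CPU core locks deactivated") and both run side by side. A condensed sketch of that pairing, with the binary path shortened from the full one in the log:

  ./spdk_tgt -m 0x1 &                                                  # claims the core 0 lock (spdk_cpu_lock_000)
  ./spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # takes no lock, so no conflict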
00:06:12.239 [2024-07-24 10:28:19.618638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.497 [2024-07-24 10:28:19.700439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.062 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.062 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.062 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2051823 00:06:13.062 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2051823 00:06:13.062 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.320 lslocks: write error 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2051823 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2051823 ']' 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2051823 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2051823 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2051823' 00:06:13.320 killing process with pid 2051823 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2051823 00:06:13.320 10:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2051823 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2052030 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2052030 ']' 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2052030 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2052030 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2052030' 00:06:14.251 
killing process with pid 2052030 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2052030 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2052030 00:06:14.251 00:06:14.251 real 0m2.515s 00:06:14.251 user 0m2.604s 00:06:14.251 sys 0m0.830s 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.251 10:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.251 ************************************ 00:06:14.251 END TEST non_locking_app_on_locked_coremask 00:06:14.251 ************************************ 00:06:14.510 10:28:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.510 10:28:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.510 10:28:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.510 10:28:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 ************************************ 00:06:14.510 START TEST locking_app_on_unlocked_coremask 00:06:14.510 ************************************ 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2052310 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2052310 /var/tmp/spdk.sock 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2052310 ']' 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.510 10:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 [2024-07-24 10:28:21.780477] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:14.510 [2024-07-24 10:28:21.780518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052310 ] 00:06:14.510 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.510 [2024-07-24 10:28:21.834166] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
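locking_app_on_unlocked_coremask, which starts here, is the mirror image: the first instance gives up its locks with --disable-cpumask-locks, so a second, normally configured target on the same core mask can claim the core 0 lock itself. Sketch of the pairing (binary path shortened):

  ./spdk_tgt -m 0x1 --disable-cpumask-locks &       # leaves core 0 unlocked
  ./spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &        # this one takes the core 0 lock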
00:06:14.510 [2024-07-24 10:28:21.834189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.510 [2024-07-24 10:28:21.876090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2052421 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2052421 /var/tmp/spdk2.sock 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2052421 ']' 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.769 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.769 [2024-07-24 10:28:22.098921] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:06:14.769 [2024-07-24 10:28:22.098970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052421 ] 00:06:14.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.769 [2024-07-24 10:28:22.168471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.124 [2024-07-24 10:28:22.254450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.709 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.709 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.709 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2052421 00:06:15.709 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2052421 00:06:15.709 10:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.967 lslocks: write error 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2052310 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2052310 ']' 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2052310 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2052310 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2052310' 00:06:15.967 killing process with pid 2052310 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2052310 00:06:15.967 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2052310 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2052421 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2052421 ']' 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2052421 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2052421 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2052421' 00:06:16.533 killing process with pid 2052421 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2052421 00:06:16.533 10:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2052421 00:06:16.791 00:06:16.791 real 0m2.446s 00:06:16.791 user 0m2.536s 00:06:16.791 sys 0m0.772s 00:06:16.791 10:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.791 10:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.791 ************************************ 00:06:16.791 END TEST locking_app_on_unlocked_coremask 00:06:16.791 ************************************ 00:06:16.791 10:28:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.791 10:28:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.791 10:28:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.791 10:28:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.050 ************************************ 00:06:17.050 START TEST locking_app_on_locked_coremask 00:06:17.050 ************************************ 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2052812 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2052812 /var/tmp/spdk.sock 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2052812 ']' 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.050 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.050 [2024-07-24 10:28:24.309147] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
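locking_app_on_locked_coremask, starting here, covers the conflict case: with the first target holding the core 0 lock, a second target on the same mask and without --disable-cpumask-locks is expected to abort during startup, which is exactly the "Cannot create lock on core 0, probably process 2052812 has claimed it" error a few lines below. A sketch of that expected failure; the non-zero exit status is inferred from the "exiting" message rather than printed in the log:

  ./spdk_tgt -m 0x1 &                           # holds the core 0 lock
  ./spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock      # aborts: lock already claimed
  echo "second instance exit status: $?"        # non-zero is the expected outcome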
00:06:17.050 [2024-07-24 10:28:24.309193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052812 ] 00:06:17.050 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.050 [2024-07-24 10:28:24.362466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.050 [2024-07-24 10:28:24.399335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2052819 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2052819 /var/tmp/spdk2.sock 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2052819 /var/tmp/spdk2.sock 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2052819 /var/tmp/spdk2.sock 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2052819 ']' 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.309 10:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.309 [2024-07-24 10:28:24.624953] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:06:17.309 [2024-07-24 10:28:24.624998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052819 ] 00:06:17.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.309 [2024-07-24 10:28:24.702114] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2052812 has claimed it. 00:06:17.309 [2024-07-24 10:28:24.702150] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.874 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2052819) - No such process 00:06:17.874 ERROR: process (pid: 2052819) is no longer running 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2052812 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2052812 00:06:17.874 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.441 lslocks: write error 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2052812 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2052812 ']' 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2052812 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2052812 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2052812' 00:06:18.441 killing process with pid 2052812 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2052812 00:06:18.441 10:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2052812 00:06:18.699 00:06:18.699 real 0m1.893s 00:06:18.699 user 0m2.011s 00:06:18.699 sys 0m0.624s 00:06:18.699 10:28:26 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.699 10:28:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.699 ************************************ 00:06:18.699 END TEST locking_app_on_locked_coremask 00:06:18.699 ************************************ 00:06:18.957 10:28:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.957 10:28:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.957 10:28:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.958 10:28:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.958 ************************************ 00:06:18.958 START TEST locking_overlapped_coremask 00:06:18.958 ************************************ 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2053195 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2053195 /var/tmp/spdk.sock 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2053195 ']' 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.958 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.958 [2024-07-24 10:28:26.260077] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
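locking_overlapped_coremask, starting here, uses two core masks that intersect: the first target runs with -m 0x7 and the second is launched with -m 0x1c, so the only contested core is core 2, which is the core named in the claim error further down. The overlap can be checked directly:

  # 0x7  = 0b00111 -> cores 0,1,2   (first target)
  # 0x1c = 0b11100 -> cores 2,3,4   (second target)
  echo $(( 0x7 & 0x1c ))            # prints 4, i.e. only bit 2 (core 2) is shared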
00:06:18.958 [2024-07-24 10:28:26.260116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053195 ] 00:06:18.958 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.958 [2024-07-24 10:28:26.314852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.958 [2024-07-24 10:28:26.357927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.958 [2024-07-24 10:28:26.358027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.958 [2024-07-24 10:28:26.358029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2053301 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2053301 /var/tmp/spdk2.sock 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2053301 /var/tmp/spdk2.sock 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2053301 /var/tmp/spdk2.sock 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2053301 ']' 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.217 10:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.217 [2024-07-24 10:28:26.585520] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:06:19.217 [2024-07-24 10:28:26.585566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053301 ] 00:06:19.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.217 [2024-07-24 10:28:26.657826] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2053195 has claimed it. 00:06:19.217 [2024-07-24 10:28:26.657861] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.783 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2053301) - No such process 00:06:19.783 ERROR: process (pid: 2053301) is no longer running 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2053195 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2053195 ']' 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2053195 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.783 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2053195 00:06:20.044 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.044 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.044 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2053195' 00:06:20.044 killing process with pid 2053195 00:06:20.044 10:28:27 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 2053195 00:06:20.044 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2053195 00:06:20.310 00:06:20.310 real 0m1.346s 00:06:20.310 user 0m3.648s 00:06:20.310 sys 0m0.375s 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.310 ************************************ 00:06:20.310 END TEST locking_overlapped_coremask 00:06:20.310 ************************************ 00:06:20.310 10:28:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:20.310 10:28:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.310 10:28:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.310 10:28:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.310 ************************************ 00:06:20.310 START TEST locking_overlapped_coremask_via_rpc 00:06:20.310 ************************************ 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2053462 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2053462 /var/tmp/spdk.sock 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2053462 ']' 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.310 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.310 [2024-07-24 10:28:27.640489] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:20.310 [2024-07-24 10:28:27.640533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053462 ] 00:06:20.310 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.310 [2024-07-24 10:28:27.694256] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
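The check_remaining_locks expansion a few lines above shows how the suite verifies the lock files after the overlap test: every claimed core leaves a /var/tmp/spdk_cpu_lock_NNN file, and for -m 0x7 the surviving set must be exactly 000 through 002. Reassembled as a standalone snippet:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match cores 0-2 only"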
00:06:20.310 [2024-07-24 10:28:27.694279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.310 [2024-07-24 10:28:27.737968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.310 [2024-07-24 10:28:27.738069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.310 [2024-07-24 10:28:27.738070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2053564 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2053564 /var/tmp/spdk2.sock 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2053564 ']' 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.568 10:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.568 [2024-07-24 10:28:27.971547] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:20.568 [2024-07-24 10:28:27.971597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053564 ] 00:06:20.568 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.827 [2024-07-24 10:28:28.046093] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.827 [2024-07-24 10:28:28.046117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.827 [2024-07-24 10:28:28.127504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.827 [2024-07-24 10:28:28.130537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.827 [2024-07-24 10:28:28.130538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.394 [2024-07-24 10:28:28.786559] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2053462 has claimed it. 
00:06:21.394 request: 00:06:21.394 { 00:06:21.394 "method": "framework_enable_cpumask_locks", 00:06:21.394 "req_id": 1 00:06:21.394 } 00:06:21.394 Got JSON-RPC error response 00:06:21.394 response: 00:06:21.394 { 00:06:21.394 "code": -32603, 00:06:21.394 "message": "Failed to claim CPU core: 2" 00:06:21.394 } 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2053462 /var/tmp/spdk.sock 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2053462 ']' 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.394 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.656 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.656 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.657 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2053564 /var/tmp/spdk2.sock 00:06:21.657 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2053564 ']' 00:06:21.657 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.657 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.657 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
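The -32603 response above is the expected outcome: the first target already holds the lock file for core 2, so the second target's framework_enable_cpumask_locks call has to fail. A hand-run equivalent of the same check, using the rpc.py script and sockets from this test (a sketch, not what the harness literally executes -- it goes through its rpc_cmd wrapper):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC framework_enable_cpumask_locks                          # first target (/var/tmp/spdk.sock): takes the per-core locks
  ls /var/tmp/spdk_cpu_lock_*                                  # lock files for the first target's cores show up here
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: fails with "Failed to claim CPU core: 2"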
00:06:21.657 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.657 10:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.915 00:06:21.915 real 0m1.563s 00:06:21.915 user 0m0.736s 00:06:21.915 sys 0m0.126s 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.915 10:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.915 ************************************ 00:06:21.915 END TEST locking_overlapped_coremask_via_rpc 00:06:21.915 ************************************ 00:06:21.915 10:28:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:21.915 10:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2053462 ]] 00:06:21.915 10:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2053462 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2053462 ']' 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2053462 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2053462 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2053462' 00:06:21.915 killing process with pid 2053462 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2053462 00:06:21.915 10:28:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2053462 00:06:22.174 10:28:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2053564 ]] 00:06:22.174 10:28:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2053564 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2053564 ']' 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2053564 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2053564 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2053564' 00:06:22.174 killing process with pid 2053564 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2053564 00:06:22.174 10:28:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2053564 00:06:22.742 10:28:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.742 10:28:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.742 10:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2053462 ]] 00:06:22.742 10:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2053462 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2053462 ']' 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2053462 00:06:22.742 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2053462) - No such process 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2053462 is not found' 00:06:22.742 Process with pid 2053462 is not found 00:06:22.742 10:28:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2053564 ]] 00:06:22.742 10:28:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2053564 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2053564 ']' 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2053564 00:06:22.742 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2053564) - No such process 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2053564 is not found' 00:06:22.742 Process with pid 2053564 is not found 00:06:22.742 10:28:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.742 00:06:22.742 real 0m12.863s 00:06:22.742 user 0m22.379s 00:06:22.742 sys 0m4.418s 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.742 10:28:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.742 ************************************ 00:06:22.742 END TEST cpu_locks 00:06:22.742 ************************************ 00:06:22.742 00:06:22.742 real 0m36.536s 00:06:22.742 user 1m9.374s 00:06:22.742 sys 0m7.607s 00:06:22.742 10:28:29 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.742 10:28:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.742 ************************************ 00:06:22.742 END TEST event 00:06:22.742 ************************************ 00:06:22.742 10:28:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:22.742 10:28:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.742 10:28:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.742 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:06:22.742 ************************************ 00:06:22.742 START TEST thread 00:06:22.742 ************************************ 00:06:22.742 10:28:30 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:22.742 * Looking for test storage... 00:06:22.742 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:22.742 10:28:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.742 10:28:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:22.742 10:28:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.742 10:28:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.742 ************************************ 00:06:22.742 START TEST thread_poller_perf 00:06:22.742 ************************************ 00:06:22.742 10:28:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.742 [2024-07-24 10:28:30.138128] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:22.742 [2024-07-24 10:28:30.138194] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053908 ] 00:06:22.742 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.742 [2024-07-24 10:28:30.196732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.000 [2024-07-24 10:28:30.237097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.000 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:23.936 ====================================== 00:06:23.936 busy:2106452840 (cyc) 00:06:23.936 total_run_count: 417000 00:06:23.936 tsc_hz: 2100000000 (cyc) 00:06:23.936 ====================================== 00:06:23.936 poller_cost: 5051 (cyc), 2405 (nsec) 00:06:23.936 00:06:23.936 real 0m1.183s 00:06:23.936 user 0m1.108s 00:06:23.936 sys 0m0.071s 00:06:23.936 10:28:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.936 10:28:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.936 ************************************ 00:06:23.936 END TEST thread_poller_perf 00:06:23.936 ************************************ 00:06:23.936 10:28:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.936 10:28:31 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:23.936 10:28:31 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.936 10:28:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.936 ************************************ 00:06:23.936 START TEST thread_poller_perf 00:06:23.936 ************************************ 00:06:23.936 10:28:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.936 [2024-07-24 10:28:31.372551] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
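The poller_cost figures printed above follow directly from the counters in the same block: busy cycles divided by total_run_count gives cycles per poll, and dividing by the TSC rate converts that to nanoseconds. A sketch of the arithmetic with the reported numbers:

  busy=2106452840      # busy: cycles for the 1 us period run
  runs=417000          # total_run_count
  tsc_hz=2100000000    # tsc_hz
  echo "cycles/poll: $(( busy / runs ))"                        # -> 5051
  echo "nsec/poll:   $(( busy / runs * 1000000000 / tsc_hz ))"  # -> 2405

The same formula accounts for the 0 us period run further down (375 cycles, 178 nsec per poll).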
00:06:23.936 [2024-07-24 10:28:31.372625] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054158 ] 00:06:24.194 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.194 [2024-07-24 10:28:31.430457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.194 [2024-07-24 10:28:31.470078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.194 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.129 ====================================== 00:06:25.129 busy:2101500828 (cyc) 00:06:25.129 total_run_count: 5599000 00:06:25.129 tsc_hz: 2100000000 (cyc) 00:06:25.129 ====================================== 00:06:25.129 poller_cost: 375 (cyc), 178 (nsec) 00:06:25.129 00:06:25.129 real 0m1.176s 00:06:25.129 user 0m1.100s 00:06:25.129 sys 0m0.071s 00:06:25.129 10:28:32 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.129 10:28:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.129 ************************************ 00:06:25.129 END TEST thread_poller_perf 00:06:25.129 ************************************ 00:06:25.129 10:28:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.129 00:06:25.129 real 0m2.557s 00:06:25.129 user 0m2.282s 00:06:25.129 sys 0m0.283s 00:06:25.129 10:28:32 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.129 10:28:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.129 ************************************ 00:06:25.129 END TEST thread 00:06:25.129 ************************************ 00:06:25.388 10:28:32 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:25.388 10:28:32 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:25.388 10:28:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.388 10:28:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.388 10:28:32 -- common/autotest_common.sh@10 -- # set +x 00:06:25.388 ************************************ 00:06:25.388 START TEST app_cmdline 00:06:25.388 ************************************ 00:06:25.388 10:28:32 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:25.388 * Looking for test storage... 00:06:25.388 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:25.388 10:28:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:25.388 10:28:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2054446 00:06:25.388 10:28:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2054446 00:06:25.388 10:28:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:25.388 10:28:32 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2054446 ']' 00:06:25.388 10:28:32 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.388 10:28:32 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.388 10:28:32 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:25.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.388 10:28:32 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.388 10:28:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.388 [2024-07-24 10:28:32.756743] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:25.388 [2024-07-24 10:28:32.756790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054446 ] 00:06:25.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.388 [2024-07-24 10:28:32.808716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.646 [2024-07-24 10:28:32.849603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.646 10:28:33 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.646 10:28:33 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:25.646 10:28:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:25.904 { 00:06:25.904 "version": "SPDK v24.09-pre git sha1 8711e7e9b", 00:06:25.904 "fields": { 00:06:25.904 "major": 24, 00:06:25.904 "minor": 9, 00:06:25.904 "patch": 0, 00:06:25.904 "suffix": "-pre", 00:06:25.904 "commit": "8711e7e9b" 00:06:25.904 } 00:06:25.904 } 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:25.904 10:28:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.904 10:28:33 
app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:25.904 10:28:33 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.162 request: 00:06:26.162 { 00:06:26.162 "method": "env_dpdk_get_mem_stats", 00:06:26.162 "req_id": 1 00:06:26.162 } 00:06:26.162 Got JSON-RPC error response 00:06:26.162 response: 00:06:26.162 { 00:06:26.162 "code": -32601, 00:06:26.162 "message": "Method not found" 00:06:26.162 } 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.162 10:28:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2054446 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2054446 ']' 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2054446 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2054446 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2054446' 00:06:26.162 killing process with pid 2054446 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@969 -- # kill 2054446 00:06:26.162 10:28:33 app_cmdline -- common/autotest_common.sh@974 -- # wait 2054446 00:06:26.421 00:06:26.421 real 0m1.151s 00:06:26.421 user 0m1.357s 00:06:26.421 sys 0m0.388s 00:06:26.421 10:28:33 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.421 10:28:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.421 ************************************ 00:06:26.421 END TEST app_cmdline 00:06:26.421 ************************************ 00:06:26.421 10:28:33 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:26.421 10:28:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.421 10:28:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.421 10:28:33 -- common/autotest_common.sh@10 -- # set +x 00:06:26.421 ************************************ 00:06:26.421 START TEST version 00:06:26.421 ************************************ 00:06:26.421 10:28:33 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:26.679 * Looking for test storage... 
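The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, which is why spdk_get_version returns the version object while env_dpdk_get_mem_stats is rejected with -32601. A minimal manual reproduction of that allow-list behaviour (a sketch using the same binaries and scripts as the test; the harness additionally waits for the socket to come up):

  BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $BIN/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  $RPC rpc_get_methods             # allowed: lists exactly the two permitted methods
  $RPC spdk_get_version            # allowed: returns the SPDK version JSON
  $RPC env_dpdk_get_mem_stats      # not on the list: JSON-RPC error -32601 "Method not found"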
00:06:26.679 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:26.680 10:28:33 version -- app/version.sh@17 -- # get_header_version major 00:06:26.680 10:28:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # cut -f2 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.680 10:28:33 version -- app/version.sh@17 -- # major=24 00:06:26.680 10:28:33 version -- app/version.sh@18 -- # get_header_version minor 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # cut -f2 00:06:26.680 10:28:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.680 10:28:33 version -- app/version.sh@18 -- # minor=9 00:06:26.680 10:28:33 version -- app/version.sh@19 -- # get_header_version patch 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # cut -f2 00:06:26.680 10:28:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.680 10:28:33 version -- app/version.sh@19 -- # patch=0 00:06:26.680 10:28:33 version -- app/version.sh@20 -- # get_header_version suffix 00:06:26.680 10:28:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # cut -f2 00:06:26.680 10:28:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.680 10:28:33 version -- app/version.sh@20 -- # suffix=-pre 00:06:26.680 10:28:33 version -- app/version.sh@22 -- # version=24.9 00:06:26.680 10:28:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:26.680 10:28:33 version -- app/version.sh@28 -- # version=24.9rc0 00:06:26.680 10:28:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:26.680 10:28:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:26.680 10:28:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:26.680 10:28:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:26.680 00:06:26.680 real 0m0.146s 00:06:26.680 user 0m0.077s 00:06:26.680 sys 0m0.103s 00:06:26.680 10:28:33 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.680 10:28:33 version -- common/autotest_common.sh@10 -- # set +x 00:06:26.680 ************************************ 00:06:26.680 END TEST version 00:06:26.680 ************************************ 00:06:26.680 10:28:34 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:26.680 10:28:34 -- spdk/autotest.sh@202 -- # uname -s 00:06:26.680 10:28:34 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:26.680 10:28:34 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:26.680 10:28:34 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:26.680 10:28:34 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:06:26.680 10:28:34 -- 
spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:26.680 10:28:34 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:26.680 10:28:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.680 10:28:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.680 10:28:34 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:26.680 10:28:34 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:26.680 10:28:34 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:26.680 10:28:34 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:26.680 10:28:34 -- spdk/autotest.sh@287 -- # '[' rdma = rdma ']' 00:06:26.680 10:28:34 -- spdk/autotest.sh@288 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:26.680 10:28:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:26.680 10:28:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.680 10:28:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.680 ************************************ 00:06:26.680 START TEST nvmf_rdma 00:06:26.680 ************************************ 00:06:26.680 10:28:34 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:26.938 * Looking for test storage... 00:06:26.938 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:26.938 10:28:34 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:26.938 10:28:34 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:26.938 10:28:34 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:26.938 10:28:34 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:26.938 10:28:34 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.938 10:28:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:26.939 ************************************ 00:06:26.939 START TEST nvmf_target_core 00:06:26.939 ************************************ 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:26.939 * Looking for test storage... 00:06:26.939 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.939 ************************************ 00:06:26.939 START TEST nvmf_abort 00:06:26.939 ************************************ 00:06:26.939 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:27.197 * Looking for test storage... 
00:06:27.197 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.197 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:27.198 10:28:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:32.468 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:32.468 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:32.468 10:28:39 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:32.468 Found net devices under 0000:da:00.0: mlx_0_0 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:32.468 Found net devices under 0000:da:00.1: mlx_0_1 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:32.468 10:28:39 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:32.468 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:32.469 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:32.469 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:06:32.469 altname enp218s0f0np0 00:06:32.469 altname ens818f0np0 00:06:32.469 inet 192.168.100.8/24 scope global mlx_0_0 00:06:32.469 valid_lft forever preferred_lft forever 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:32.469 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:32.469 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:06:32.469 altname enp218s0f1np1 00:06:32.469 altname ens818f1np1 00:06:32.469 inet 192.168.100.9/24 scope global mlx_0_1 00:06:32.469 valid_lft forever preferred_lft forever 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.469 10:28:39 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:32.469 192.168.100.9' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:32.469 192.168.100.9' 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:06:32.469 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:32.728 192.168.100.9' 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 
-- # set +x 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2057979 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2057979 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2057979 ']' 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.728 10:28:39 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.728 [2024-07-24 10:28:39.989762] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:32.728 [2024-07-24 10:28:39.989817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.728 [2024-07-24 10:28:40.047265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.728 [2024-07-24 10:28:40.092695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.728 [2024-07-24 10:28:40.092734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.728 [2024-07-24 10:28:40.092741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.728 [2024-07-24 10:28:40.092747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.728 [2024-07-24 10:28:40.092753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
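At this point the target application has been launched with core mask 0xE (three reactors, as the core 1/2/3 notices below confirm) and the harness blocks in waitforlisten until the app answers on its UNIX-domain RPC socket at /var/tmp/spdk.sock. A minimal readiness-poll sketch of that step, assuming the stock scripts/rpc.py helper and the default socket path — this is an illustrative stand-in, not the actual waitforlisten implementation from autotest_common.sh:

  #!/usr/bin/env bash
  # Hypothetical readiness poll: retry a harmless RPC against the target's
  # UNIX-domain socket until it answers, giving up after ~10 seconds.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          echo "nvmf_tgt is up and listening on $sock"
          exit 0
      fi
      sleep 0.1
  done
  echo "nvmf_tgt never started listening on $sock" >&2
  exit 1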
00:06:32.728 [2024-07-24 10:28:40.092859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.728 [2024-07-24 10:28:40.092877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.729 [2024-07-24 10:28:40.092878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 [2024-07-24 10:28:40.257671] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x180dc90/0x1812140) succeed. 00:06:32.988 [2024-07-24 10:28:40.273690] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x180f1e0/0x18537d0) succeed. 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 Malloc0 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 Delay0 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:32.988 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.247 [2024-07-24 10:28:40.449617] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.247 10:28:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:33.247 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.247 [2024-07-24 10:28:40.546837] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:35.780 Initializing NVMe Controllers 00:06:35.780 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:35.780 controller IO queue size 128 less than required 00:06:35.780 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:35.780 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:35.780 Initialization complete. Launching workers. 
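By now the abort test has wired the target up entirely over JSON-RPC (transport, Malloc0, a 1 ms Delay0 wrapper, subsystem cnode0, namespace and listeners) and launched the abort example against it; the completion counters that follow come from that run. For reference, the same sequence replayed as a standalone script — a sketch assembled from the rpc_cmd calls in the trace, with this run's paths and the 192.168.100.8 listener address:

  #!/usr/bin/env bash
  # RPC sequence used by target/abort.sh in this run, condensed.
  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
  ip=192.168.100.8

  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  # Delay0 adds artificial latency so in-flight I/O is available to abort.
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a "$ip" -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a "$ip" -s 4420

  # Drive 128-deep I/O at the delayed namespace for 1 second while issuing aborts.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
      -r "trtype:rdma adrfam:IPv4 traddr:$ip trsvcid:4420" -c 0x1 -t 1 -l warning -q 128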
00:06:35.780 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51692 00:06:35.780 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51753, failed to submit 62 00:06:35.780 success 51693, unsuccess 60, failed 0 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:35.780 rmmod nvme_rdma 00:06:35.780 rmmod nvme_fabrics 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2057979 ']' 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2057979 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2057979 ']' 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2057979 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2057979 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2057979' 00:06:35.780 killing process with pid 2057979 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2057979 00:06:35.780 10:28:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2057979 00:06:35.780 10:28:43 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:06:35.780 00:06:35.780 real 0m8.645s 00:06:35.780 user 0m12.231s 00:06:35.780 sys 0m4.531s 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 ************************************ 00:06:35.780 END TEST nvmf_abort 00:06:35.780 ************************************ 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 ************************************ 00:06:35.780 START TEST nvmf_ns_hotplug_stress 00:06:35.780 ************************************ 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:35.780 * Looking for test storage... 00:06:35.780 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.780 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:35.781 10:28:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:41.087 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:41.087 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.087 10:28:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:41.087 Found net devices under 0000:da:00.0: mlx_0_0 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:41.087 Found net devices under 0000:da:00.1: mlx_0_1 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:41.087 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:41.347 10:28:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:41.347 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:41.347 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:06:41.347 altname enp218s0f0np0 00:06:41.347 altname ens818f0np0 00:06:41.347 inet 192.168.100.8/24 scope global mlx_0_0 00:06:41.347 valid_lft forever preferred_lft forever 
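The address harvesting repeated here is a three-stage pipeline over iproute2's one-line output: field 4 of `ip -o -4 addr show` carries "addr/prefix", and cut drops the prefix. A minimal standalone equivalent of the get_ip_address helper traced above, using this run's mlx_0_0 interface as the example argument:

  #!/usr/bin/env bash
  # Print the first IPv4 address of an interface, as nvmf/common.sh's
  # get_ip_address does in the trace above.
  get_ip_address() {
      local interface=$1
      # -o emits one line per address; $4 is "192.168.100.8/24"; cut strips "/24".
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this host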
00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:41.347 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:41.347 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:06:41.347 altname enp218s0f1np1 00:06:41.347 altname ens818f1np1 00:06:41.347 inet 192.168.100.9/24 scope global mlx_0_1 00:06:41.347 valid_lft forever preferred_lft forever 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:41.347 10:28:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:41.347 192.168.100.9' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:41.347 192.168.100.9' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:41.347 192.168.100.9' 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:06:41.347 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:41.348 10:28:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2061645 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2061645 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2061645 ']' 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.348 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:41.348 [2024-07-24 10:28:48.753181] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:06:41.348 [2024-07-24 10:28:48.753231] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.606 [2024-07-24 10:28:48.810809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.606 [2024-07-24 10:28:48.854348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.606 [2024-07-24 10:28:48.854391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.606 [2024-07-24 10:28:48.854397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.606 [2024-07-24 10:28:48.854403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.606 [2024-07-24 10:28:48.854408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
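The hotplug-stress phase that follows pairs a long-running spdk_nvme_perf client with a target-side loop that keeps removing and re-adding the namespace under cnode1 while growing the NULL1 bdev one block at a time. A condensed sketch of that loop, reconstructed from the ns_hotplug_stress.sh calls visible in the trace below (the perf invocation and RPC names are taken from this run; the loop structure is an approximation of the script):

  #!/usr/bin/env bash
  # Condensed hot-plug stress loop, reconstructed from the trace.
  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
  perf="/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf"
  nqn=nqn.2016-06.io.spdk:cnode1

  # Background 30-second random-read load against the subsystem.
  $perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
  perf_pid=$!

  null_size=1000
  while kill -0 "$perf_pid" 2>/dev/null; do
      # Hot-remove namespace 1, re-attach Delay0, then resize the null bdev.
      $rpc nvmf_subsystem_remove_ns "$nqn" 1
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"
  done
  wait "$perf_pid"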
00:06:41.606 [2024-07-24 10:28:48.854529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.606 [2024-07-24 10:28:48.854640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.607 [2024-07-24 10:28:48.854641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:41.607 10:28:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:41.865 [2024-07-24 10:28:49.155818] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbe7c90/0xbec140) succeed. 00:06:41.865 [2024-07-24 10:28:49.164780] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbe91e0/0xc2d7d0) succeed. 00:06:41.865 10:28:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.125 10:28:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:42.416 [2024-07-24 10:28:49.632591] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:42.416 10:28:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:42.416 10:28:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:42.674 Malloc0 00:06:42.674 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:42.932 Delay0 00:06:42.932 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.932 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:06:43.190 NULL1 00:06:43.190 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:43.448 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2061913 00:06:43.448 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:43.448 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:43.449 10:28:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.825 Read completed with error (sct=0, sc=11) 00:06:44.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.825 10:28:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.825 10:28:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:44.825 10:28:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:44.825 true 00:06:44.825 10:28:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:44.825 10:28:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.760 10:28:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.021 10:28:53 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:46.021 10:28:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:46.021 true 00:06:46.021 10:28:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:46.021 10:28:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.957 10:28:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.215 10:28:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:47.215 10:28:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:47.215 true 00:06:47.215 10:28:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:47.215 10:28:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.149 10:28:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.407 10:28:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:48.407 10:28:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:48.407 true 00:06:48.407 10:28:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2061913 00:06:48.407 10:28:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.343 10:28:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.601 10:28:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:49.601 10:28:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:49.601 true 00:06:49.602 10:28:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:49.602 10:28:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.536 10:28:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.793 10:28:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:50.793 10:28:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:50.793 true 00:06:50.793 10:28:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:50.793 10:28:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.727 10:28:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.727 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:06:51.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.986 10:28:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:51.986 10:28:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:51.986 true 00:06:52.243 10:28:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:52.243 10:28:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 10:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.067 10:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:53.067 10:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:53.326 true 00:06:53.326 10:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:53.326 10:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.262 10:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.262 10:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:54.262 10:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:54.520 true 00:06:54.520 10:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:54.520 10:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.456 10:29:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.456 10:29:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:55.456 10:29:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:55.715 true 00:06:55.715 10:29:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:55.715 10:29:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.650 10:29:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.650 10:29:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:56.650 10:29:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:56.908 true 00:06:56.908 10:29:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:56.908 10:29:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.843 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.843 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:57.843 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:58.101 true 00:06:58.101 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:58.101 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.101 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.359 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:58.359 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:58.618 true 00:06:58.618 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:58.618 10:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.994 10:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.995 10:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:59.995 10:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:59.995 true 00:06:59.995 10:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:06:59.995 10:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.930 10:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.188 10:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:01.188 10:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:01.188 true 00:07:01.188 10:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:01.188 10:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.188 10:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.188 10:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:02.188 10:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:02.460 true 00:07:02.460 10:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:02.460 10:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.394 10:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.394 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.394 10:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:03.394 10:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:03.652 true 00:07:03.652 10:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:03.652 10:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 10:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.587 10:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:04.587 10:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:04.846 true 00:07:04.846 10:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:04.846 10:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.780 10:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.780 10:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:05.780 10:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:06.038 true 00:07:06.038 10:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:06.038 10:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.973 10:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.236 10:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:07.236 10:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:07.236 true 00:07:07.236 10:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:07.236 10:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.171 10:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.430 10:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:08.430 10:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:08.430 true 00:07:08.430 10:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:08.688 10:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.255 10:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.513 10:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:09.513 10:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:09.772 true 00:07:09.772 10:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:09.772 10:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.708 10:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.708 10:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:10.708 10:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:10.967 true 00:07:10.967 10:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:10.967 10:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.903 10:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.903 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:07:11.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.903 10:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:11.903 10:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:11.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.162 true 00:07:12.162 10:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:12.162 10:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.099 10:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.099 10:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:13.099 10:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:13.357 true 00:07:13.357 10:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:13.357 10:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.293 10:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.293 10:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:14.293 10:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:14.552 true 00:07:14.552 10:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:14.552 10:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:14.809 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.067 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:15.067 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:15.067 true 00:07:15.067 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:15.067 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.325 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.582 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:15.582 10:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:15.582 true 00:07:15.582 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:15.582 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.840 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.097 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:16.097 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:16.097 true 00:07:16.097 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:16.098 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.356 Initializing NVMe Controllers 00:07:16.356 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:16.356 Controller IO queue size 128, less than required. 00:07:16.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:16.356 Controller IO queue size 128, less than required. 00:07:16.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:16.356 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:16.356 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:16.356 Initialization complete. Launching workers.
00:07:16.356 ========================================================
00:07:16.356 Latency(us)
00:07:16.356 Device Information : IOPS MiB/s Average min max
00:07:16.356 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5568.90 2.72 20582.22 871.48 1137867.81
00:07:16.356 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33879.30 16.54 3778.00 2136.59 293825.83
00:07:16.356 ========================================================
00:07:16.356 Total : 39448.20 19.26 6150.25 871.48 1137867.81
00:07:16.356
00:07:16.356 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.613 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:16.613 10:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:16.872 true 00:07:16.872 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2061913 00:07:16.872 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2061913) - No such process 00:07:16.872 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2061913 00:07:16.872 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.872 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.130 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:17.130 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:17.130 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:17.130 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.130 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:17.388 null0 00:07:17.388 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.388 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.388 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:17.388 null1 00:07:17.388 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- #
(( ++i )) 00:07:17.388 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.388 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:17.646 null2 00:07:17.646 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.646 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.646 10:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:17.905 null3 00:07:17.905 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.905 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.905 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:17.905 null4 00:07:18.163 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.163 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.163 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:18.163 null5 00:07:18.163 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.163 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.163 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:18.421 null6 00:07:18.422 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.422 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.422 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:18.681 null7 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2067969 2067970 2067972 2067973 2067975 2067977 2067979 2067981 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.681 10:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.681 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.681 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.682 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.682 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.682 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.682 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.682 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.682 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.940 10:29:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.940 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.941 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.941 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.199 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.458 10:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
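[Editor's sketch] The interleaved ns_hotplug_stress.sh@14-@18 traces above come from a small per-namespace loop: each worker repeatedly attaches one null bdev as a namespace of cnode1 and detaches it again via rpc.py. The following is a hedged reconstruction assembled only from the traced commands (the helper name add_remove, the nsid/bdev locals, the 10-iteration counter, and the rpc.py argument order all appear in the log); the exact body of the real script is an assumption.

  #!/usr/bin/env bash
  # rpc.py path as it appears in the traced commands
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  add_remove() {
      local nsid=$1 bdev=$2              # e.g. "add_remove 7 null6" in the trace
      local i
      for (( i = 0; i < 10; i++ )); do
          # attach the null bdev as namespace $nsid of cnode1 ...
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          # ... then detach it, so the namespace keeps hot-plugging in and out
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }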
00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.717 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.718 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.977 10:29:27 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.977 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
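[Editor's sketch] The @62-@66 traces near the top of this block show how those workers are launched: one backgrounded add_remove per null bdev, with the child PIDs collected and a single wait at the end (the "wait 2067969 2067970 ..." record above). A hedged reconstruction under the same assumptions; nthreads and the nsid-to-bdev mapping are inferred from the traced values, "pids+=($!)" appears verbatim in the log.

  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      add_remove "$((i + 1))" "null$i" &   # nsid 1..8 mapped onto null0..null7, as in the log
      pids+=($!)                           # collect each worker's PID
  done
  wait "${pids[@]}"                        # corresponds to the traced "wait <pid> <pid> ..." line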
00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.236 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.494 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.495 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.753 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.753 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.753 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.753 10:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.753 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.012 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.270 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.569 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 10:29:28 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 10:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.884 10:29:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.884 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.142 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.143 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.401 10:29:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:22.401 rmmod nvme_rdma 00:07:22.401 rmmod nvme_fabrics 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.401 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2061645 ']' 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2061645 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2061645 ']' 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2061645 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2061645 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2061645' 00:07:22.402 killing process with pid 2061645 00:07:22.402 10:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2061645 00:07:22.402 10:29:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2061645 00:07:22.660 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.660 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:22.661 00:07:22.661 real 0m46.981s 00:07:22.661 user 3m17.950s 00:07:22.661 sys 0m11.816s 00:07:22.661 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.661 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:22.661 ************************************ 00:07:22.661 END TEST nvmf_ns_hotplug_stress 00:07:22.661 ************************************ 00:07:22.661 10:29:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:22.661 10:29:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.661 10:29:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.661 10:29:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.661 ************************************ 00:07:22.661 START TEST nvmf_delete_subsystem 00:07:22.661 ************************************ 00:07:22.661 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:22.920 * Looking for test storage... 00:07:22.920 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:22.920 10:29:30 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.920 10:29:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.188 10:29:35 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.188 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:28.189 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:28.189 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:28.189 Found net devices under 0000:da:00.0: mlx_0_0 00:07:28.189 10:29:35 
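Device discovery above walks the supported-ID tables (the mlx entries include 0x15b3 device 0x1015, the ConnectX-4 Lx found at 0000:da:00.0 and 0000:da:00.1) and then resolves each PCI function to its Linux netdev through sysfs, which is where the mlx_0_0 / mlx_0_1 names come from. The lookup can be reproduced on its own; the pci value below is simply the first function from this log:

pci=0000:da:00.0                                  # example address from the trace
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs lists the netdevs bound to the function
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"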
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:28.189 Found net devices under 0000:da:00.1: mlx_0_1 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:28.189 10:29:35 
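rdma_device_init begins by loading the kernel RDMA stack (the modprobe lines above) before allocate_nic_ips assigns the 192.168.100.x addresses. The module sequence is standalone and matches the trace exactly:

# load_ib_rdma_modules, as executed above
modprobe ib_cm
modprobe ib_core
modprobe ib_umad
modprobe ib_uverbs
modprobe iw_cm
modprobe rdma_cm
modprobe rdma_ucm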
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:28.189 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:28.189 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:28.189 altname enp218s0f0np0 00:07:28.189 altname ens818f0np0 00:07:28.189 inet 192.168.100.8/24 scope global mlx_0_0 00:07:28.189 valid_lft forever preferred_lft forever 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:28.189 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:28.189 10:29:35 
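get_ip_address is a one-liner over ip -o -4 addr show: field 4 is the CIDR address and the prefix length is cut off, which is how 192.168.100.8 is read back from mlx_0_0 above. A minimal equivalent:

get_ip_address() {
  local interface=$1
  # e.g. "... inet 192.168.100.8/24 ..." -> 192.168.100.8
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this node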
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:28.190 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:28.190 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:28.190 altname enp218s0f1np1 00:07:28.190 altname ens818f1np1 00:07:28.190 inet 192.168.100.9/24 scope global mlx_0_1 00:07:28.190 valid_lft forever preferred_lft forever 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:28.190 10:29:35 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:28.190 192.168.100.9' 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:28.190 192.168.100.9' 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:07:28.190 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:28.448 192.168.100.9' 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # 
modprobe nvme-rdma 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2071893 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2071893 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2071893 ']' 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.448 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.448 [2024-07-24 10:29:35.729904] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:07:28.448 [2024-07-24 10:29:35.729954] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.448 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.448 [2024-07-24 10:29:35.786150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.448 [2024-07-24 10:29:35.828852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.448 [2024-07-24 10:29:35.828894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.448 [2024-07-24 10:29:35.828901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.448 [2024-07-24 10:29:35.828906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.448 [2024-07-24 10:29:35.828911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
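nvmfappstart (delete_subsystem.sh@13) amounts to launching the target with the flags traced above and blocking until its RPC socket answers; waitforlisten in autotest_common.sh is what prints the "Waiting for process to start up..." line. A simplified stand-in, assuming repo-relative SPDK paths and using rpc_get_methods as the readiness probe (the real helper also tracks the PID and timeouts more carefully):

# start the target roughly as nvmfappstart -m 0x3 does in this run
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# simplified waitforlisten: poll the RPC socket until it responds
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" || exit 1   # bail out if the target already died
  sleep 0.5
done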
00:07:28.448 [2024-07-24 10:29:35.829005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.448 [2024-07-24 10:29:35.829007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.705 10:29:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.705 [2024-07-24 10:29:35.973454] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa3ee60/0xa43310) succeed. 00:07:28.705 [2024-07-24 10:29:35.982341] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa40310/0xa849a0) succeed. 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.705 [2024-07-24 10:29:36.074817] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.705 NULL1 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.705 Delay0 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2071976 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:28.705 10:29:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:28.705 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.962 [2024-07-24 10:29:36.167772] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
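Everything the first half of the test needs is now visible in the trace: an RDMA transport, subsystem cnode1 (allow-any-host, serial SPDK00000000000001, at most 10 namespaces), a listener on 192.168.100.8:4420, a null bdev wrapped by a delay bdev that adds about one second of latency to every I/O, the namespace mapping, and a 5-second spdk_nvme_perf run with 128 queued 512-byte random I/Os. rpc_cmd is a thin wrapper around scripts/rpc.py, so roughly the same sequence can be written directly (paths are assumed relative to the SPDK repo):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512       # arguments as traced: size in MB, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 70/30 random read/write load that will still be queued when the subsystem is deleted
./build/bin/spdk_nvme_perf -c 0xC \
  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!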
00:07:30.865 10:29:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.865 10:29:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.865 10:29:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.798 NVMe io qpair process completion error 00:07:31.798 NVMe io qpair process completion error 00:07:31.798 NVMe io qpair process completion error 00:07:31.798 NVMe io qpair process completion error 00:07:31.798 NVMe io qpair process completion error 00:07:31.798 NVMe io qpair process completion error 00:07:31.798 10:29:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.798 10:29:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:31.798 10:29:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2071976 00:07:31.798 10:29:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:32.362 10:29:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:32.362 10:29:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2071976 00:07:32.362 10:29:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O 
failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Write completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.928 starting I/O failed: -6 00:07:32.928 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read 
completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write 
completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 starting I/O failed: -6 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Write 
completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Write completed with error (sct=0, sc=8) 00:07:32.929 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 
00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Write completed with error (sct=0, sc=8) 00:07:32.930 Read completed with error (sct=0, sc=8) 00:07:32.930 Initializing NVMe Controllers 00:07:32.930 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:32.930 Controller IO queue size 128, less than required. 00:07:32.930 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:32.930 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:32.930 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:32.930 Initialization complete. Launching workers. 00:07:32.930 ======================================================== 00:07:32.930 Latency(us) 00:07:32.930 Device Information : IOPS MiB/s Average min max 00:07:32.930 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.47 0.04 1594048.93 1000191.97 2977000.48 00:07:32.930 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.47 0.04 1595301.03 1001082.64 2977433.60 00:07:32.930 ======================================================== 00:07:32.930 Total : 160.94 0.08 1594674.98 1000191.97 2977433.60 00:07:32.930 00:07:32.930 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:32.930 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2071976 00:07:32.930 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:32.930 [2024-07-24 10:29:40.265442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:32.930 [2024-07-24 10:29:40.265480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
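The error flood is the point of the test rather than a failure of it: Delay0 is still holding roughly a second's worth of queued commands when delete_subsystem.sh@32 removes cnode1, so each outstanding command completes with sct=0, sc=8, which the NVMe base specification lists as Command Aborted due to SQ Deletion; the host then sees the CQ transport error above, puts the controller into a failed state, and the perf process exits reporting errors. A tiny, purely illustrative decoder for the generic (sct=0) codes seen here:

# illustrative only; not an SPDK helper
decode_generic_sc() {
  case "$1" in
    0) echo "Successful Completion" ;;
    7) echo "Command Abort Requested" ;;
    8) echo "Command Aborted due to SQ Deletion" ;;
    *) echo "other generic status code: $1" ;;
  esac
}
decode_generic_sc 8   # -> Command Aborted due to SQ Deletion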
00:07:32.930 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2071976 00:07:33.494 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2071976) - No such process 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2071976 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2071976 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2071976 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.494 [2024-07-24 10:29:40.785300] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2072835 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:33.494 10:29:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.494 [2024-07-24 10:29:40.861990] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:34.058 10:29:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.058 10:29:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:34.058 10:29:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.622 10:29:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.622 10:29:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:34.622 10:29:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.878 10:29:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.878 10:29:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:34.878 10:29:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.444 10:29:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.444 10:29:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:35.444 10:29:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.010 10:29:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.010 10:29:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:36.010 10:29:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.576 10:29:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.576 10:29:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:36.576 10:29:43 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.142 10:29:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.142 10:29:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:37.142 10:29:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.400 10:29:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.400 10:29:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:37.400 10:29:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.967 10:29:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.967 10:29:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:37.967 10:29:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.533 10:29:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.533 10:29:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:38.533 10:29:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.100 10:29:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.100 10:29:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:39.100 10:29:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.667 10:29:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.667 10:29:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:39.667 10:29:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.924 10:29:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.924 10:29:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:39.924 10:29:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.490 10:29:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.490 10:29:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835 00:07:40.490 10:29:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.749 Initializing NVMe Controllers 00:07:40.749 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.749 Controller IO queue size 128, less than required. 00:07:40.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
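The subsystem deleted during the first run is recreated before this second perf run by the three rpc_cmd calls traced earlier (delete_subsystem.sh@48 to @50). A hedged sketch of the equivalent standalone commands, assuming an SPDK checkout and the default /var/tmp/spdk.sock RPC socket (flags copied from the trace):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

Re-adding the Delay0 bdev as the namespace is what lets the second spdk_nvme_perf run (pid 2072835) queue I/O against the same NQN.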
00:07:40.749 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:40.749 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:40.749 Initialization complete. Launching workers.
00:07:40.749 ========================================================
00:07:40.749                                                                       Latency(us)
00:07:40.749 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:07:40.749 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1001308.40 1000059.12 1004449.73
00:07:40.749 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002404.98 1000097.00 1006948.33
00:07:40.749 ========================================================
00:07:40.749 Total                                                                          :     256.00       0.12 1001856.69 1000059.12 1006948.33
00:07:40.749
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2072835
00:07:41.007 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2072835) - No such process
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2072835
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:07:41.007 rmmod nvme_rdma
00:07:41.007 rmmod nvme_fabrics
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2071893 ']'
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2071893
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2071893 ']'
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2071893
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
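In the latency summaries above, MiB/s is simply IOPS multiplied by the 512-byte I/O size this perf run uses (-o 512). A quick check of the second table's per-core rate:

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f\n", 128.00 * 512 / (1024 * 1024) }'   # prints 0.06, matching the table

The same arithmetic gives 0.04 MiB/s for the 80.47 IOPS cores of the first (aborted) run.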
00:07:41.007 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2071893 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2071893' 00:07:41.265 killing process with pid 2071893 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2071893 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2071893 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:41.265 00:07:41.265 real 0m18.579s 00:07:41.265 user 0m48.437s 00:07:41.265 sys 0m5.113s 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.265 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.265 ************************************ 00:07:41.265 END TEST nvmf_delete_subsystem 00:07:41.265 ************************************ 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.524 ************************************ 00:07:41.524 START TEST nvmf_host_management 00:07:41.524 ************************************ 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:41.524 * Looking for test storage... 
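run_test essentially wraps the next script in timing and xtrace bookkeeping; the suite that follows can also be launched directly. A minimal sketch, assuming an SPDK source tree with RDMA-capable NICs and hugepages already configured (root is typically required):

    cd /path/to/spdk            # hypothetical checkout location
    sudo ./test/nvmf/target/host_management.sh --transport=rdma

The remainder of this excerpt is the output of that one script.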
00:07:41.524 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.524 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.525 10:29:48 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@297 -- # local -ga x722 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.789 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:46.790 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma 
]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:46.790 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:46.790 Found net devices under 0000:da:00.0: mlx_0_0 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:46.790 Found net devices under 0000:da:00.1: mlx_0_1 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@414 -- # is_hw=yes 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:46.790 10:29:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.790 10:29:54 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:46.790 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:46.790 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:46.790 altname enp218s0f0np0 00:07:46.790 altname ens818f0np0 00:07:46.790 inet 192.168.100.8/24 scope global mlx_0_0 00:07:46.790 valid_lft forever preferred_lft forever 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:46.790 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:46.790 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:46.790 altname enp218s0f1np1 00:07:46.790 altname ens818f1np1 00:07:46.790 inet 192.168.100.9/24 scope global mlx_0_1 00:07:46.790 valid_lft forever preferred_lft forever 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- 
# '[' '' == iso ']' 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:46.790 10:29:54 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:46.790 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:46.791 192.168.100.9' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:46.791 192.168.100.9' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:46.791 192.168.100.9' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2077163 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2077163 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2077163 ']' 00:07:46.791 10:29:54 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.791 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.791 [2024-07-24 10:29:54.198699] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:07:46.791 [2024-07-24 10:29:54.198751] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.791 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.048 [2024-07-24 10:29:54.256341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.048 [2024-07-24 10:29:54.300889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.048 [2024-07-24 10:29:54.300931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.048 [2024-07-24 10:29:54.300937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.048 [2024-07-24 10:29:54.300943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.048 [2024-07-24 10:29:54.300948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
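nvmf_tgt is started above with -m 0x1E, so the target claims cores 1 through 4 and leaves core 0 free for the bdevperf client launched later (its EAL line shows -c 0x1). A one-liner to decode such a mask (the 0..7 range is just for illustration):

    mask=0x1E
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "core $core"
    done
    # prints core 1, core 2, core 3, core 4 - matching the reactor notices below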
00:07:47.048 [2024-07-24 10:29:54.301068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.048 [2024-07-24 10:29:54.301168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.048 [2024-07-24 10:29:54.301273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.048 [2024-07-24 10:29:54.301275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.048 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.048 [2024-07-24 10:29:54.466404] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d38ac0/0x1d3cfb0) succeed. 00:07:47.048 [2024-07-24 10:29:54.475541] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d3a0b0/0x1d7e640) succeed. 
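The nvmf_create_transport call traced above is what brings up the RDMA transport inside nvmf_tgt; the two create_ib_device notices just above it correspond to the two mlx5 ports (0000:da:00.0 and 00.1) detected earlier. A standalone equivalent, assuming the default RPC socket (flags copied from the trace):

    ./scripts/rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024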
00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.308 Malloc0 00:07:47.308 [2024-07-24 10:29:54.650882] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2077332 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2077332 /var/tmp/bdevperf.sock 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2077332 ']' 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
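The bdevperf command line above receives its configuration as --json /dev/fd/63, i.e. via bash process substitution from gen_nvmf_target_json (the resolved JSON is printed just below). A sketch of the same invocation with the config written to a regular file instead; gen_nvmf_target_json is a helper defined in nvmf/common.sh, so it must be sourced first, and the file name here is arbitrary:

    gen_nvmf_target_json 0 > /tmp/nvme0_target.json
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_target.json -q 64 -o 65536 -w verify -t 10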
00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:47.308 { 00:07:47.308 "params": { 00:07:47.308 "name": "Nvme$subsystem", 00:07:47.308 "trtype": "$TEST_TRANSPORT", 00:07:47.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.308 "adrfam": "ipv4", 00:07:47.308 "trsvcid": "$NVMF_PORT", 00:07:47.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.308 "hdgst": ${hdgst:-false}, 00:07:47.308 "ddgst": ${ddgst:-false} 00:07:47.308 }, 00:07:47.308 "method": "bdev_nvme_attach_controller" 00:07:47.308 } 00:07:47.308 EOF 00:07:47.308 )") 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:47.308 10:29:54 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:47.308 "params": { 00:07:47.308 "name": "Nvme0", 00:07:47.308 "trtype": "rdma", 00:07:47.308 "traddr": "192.168.100.8", 00:07:47.308 "adrfam": "ipv4", 00:07:47.308 "trsvcid": "4420", 00:07:47.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.308 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.308 "hdgst": false, 00:07:47.308 "ddgst": false 00:07:47.308 }, 00:07:47.308 "method": "bdev_nvme_attach_controller" 00:07:47.308 }' 00:07:47.308 [2024-07-24 10:29:54.742631] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:07:47.308 [2024-07-24 10:29:54.742677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077332 ] 00:07:47.638 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.638 [2024-07-24 10:29:54.797554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.638 [2024-07-24 10:29:54.837473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.638 Running I/O for 10 seconds... 
00:07:47.638 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.638 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:47.638 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:47.638 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.638 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=173 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 173 -ge 100 ']' 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
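Note: waitforio in target/host_management.sh polls the perf app instead of sleeping for a fixed time: it reads num_read_ops for Nvme0n1 over the bdevperf RPC socket and stops once at least 100 reads have completed (173 on the first poll here). A condensed sketch of that loop follows; the retry count and jq filter are taken from the log, the sleep interval is assumed:

    # wait until the bdev has actually served reads (condensed sketch)
    for ((i = 10; i != 0; i--)); do
        count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$count" -ge 100 ] && break
        sleep 0.25
    done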
00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.897 [2024-07-24 10:29:55.137920] rdma.c: 864:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 3 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.897 10:29:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:48.833 [2024-07-24 10:29:56.143276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:07:48.833 [2024-07-24 10:29:56.143305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.833 [2024-07-24 10:29:56.143324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:07:48.833 [2024-07-24 10:29:56.143332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.833 [2024-07-24 10:29:56.143340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:07:48.833 [2024-07-24 10:29:56.143347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.833 [2024-07-24 10:29:56.143354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:07:48.833 [2024-07-24 10:29:56.143361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.833 [2024-07-24 10:29:56.143369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:07:48.834 [2024-07-24 10:29:56.143494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:07:48.834 [2024-07-24 10:29:56.143509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:07:48.834 [2024-07-24 10:29:56.143523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:07:48.834 [2024-07-24 10:29:56.143537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:07:48.834 [2024-07-24 10:29:56.143551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:07:48.834 [2024-07-24 10:29:56.143566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:07:48.834 [2024-07-24 10:29:56.143782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:07:48.834 [2024-07-24 10:29:56.143800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:07:48.834 [2024-07-24 10:29:56.143819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:07:48.834 [2024-07-24 10:29:56.143834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.834 [2024-07-24 10:29:56.143842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:07:48.834 [2024-07-24 10:29:56.143849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:07:48.835 [2024-07-24 10:29:56.143992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.143999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:07:48.835 [2024-07-24 10:29:56.144203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:07:48.835 [2024-07-24 10:29:56.144218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 [2024-07-24 10:29:56.144225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:07:48.835 [2024-07-24 10:29:56.144232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:55728 cdw0:bc6cd000 sqhd:0950 p:1 m:0 dnr:0 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2077332 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:48.835 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:48.835 { 00:07:48.835 "params": { 00:07:48.835 "name": "Nvme$subsystem", 00:07:48.835 "trtype": "$TEST_TRANSPORT", 00:07:48.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.835 "adrfam": "ipv4", 00:07:48.835 "trsvcid": "$NVMF_PORT", 00:07:48.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.835 "hdgst": ${hdgst:-false}, 00:07:48.836 "ddgst": ${ddgst:-false} 00:07:48.836 }, 00:07:48.836 "method": "bdev_nvme_attach_controller" 00:07:48.836 } 00:07:48.836 EOF 00:07:48.836 )") 00:07:48.836 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:48.836 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
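Note: the long run of 'ABORTED - SQ DELETION' completions above is the behaviour under test, not a failure. nqn.2016-06.io.spdk:host0 is removed from cnode0 while bdevperf still has writes queued, the target tears down the RDMA qpair ('Destroying qpair when queue depth is 3'), and every outstanding WRITE is completed with an abort status. The host is added back, the first bdevperf (pid 2077332) is killed, and the 1-second run whose output follows re-verifies that I/O completes again. Condensed from the rpc_cmd calls in the log, with rpc.py standing in for the test's rpc_cmd wrapper:

    # revoke and immediately re-grant the host; in-flight I/O aborts in between
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    ./scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1   # let the aborted I/O drain before restarting the perf app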
00:07:48.836 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:48.836 10:29:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:48.836 "params": { 00:07:48.836 "name": "Nvme0", 00:07:48.836 "trtype": "rdma", 00:07:48.836 "traddr": "192.168.100.8", 00:07:48.836 "adrfam": "ipv4", 00:07:48.836 "trsvcid": "4420", 00:07:48.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:48.836 "hdgst": false, 00:07:48.836 "ddgst": false 00:07:48.836 }, 00:07:48.836 "method": "bdev_nvme_attach_controller" 00:07:48.836 }' 00:07:48.836 [2024-07-24 10:29:56.189862] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:07:48.836 [2024-07-24 10:29:56.189910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077578 ] 00:07:48.836 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.836 [2024-07-24 10:29:56.244158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.836 [2024-07-24 10:29:56.284494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.095 Running I/O for 1 seconds... 00:07:50.031 00:07:50.031 Latency(us) 00:07:50.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.031 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:50.031 Verification LBA range: start 0x0 length 0x400 00:07:50.031 Nvme0n1 : 1.01 3017.32 188.58 0.00 0.00 20777.15 655.36 43191.34 00:07:50.031 =================================================================================================================== 00:07:50.031 Total : 3017.32 188.58 0.00 0.00 20777.15 655.36 43191.34 00:07:50.289 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2077332 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:50.289 10:29:57 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:50.289 rmmod nvme_rdma 00:07:50.289 rmmod nvme_fabrics 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2077163 ']' 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2077163 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2077163 ']' 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2077163 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.289 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2077163 00:07:50.548 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:50.548 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:50.548 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2077163' 00:07:50.548 killing process with pid 2077163 00:07:50.548 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2077163 00:07:50.548 10:29:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2077163 00:07:50.807 [2024-07-24 10:29:58.018756] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:50.807 00:07:50.807 real 0m9.280s 00:07:50.807 user 0m19.008s 00:07:50.807 sys 0m4.752s 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.807 ************************************ 00:07:50.807 END TEST nvmf_host_management 00:07:50.807 ************************************ 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 
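Note: nvmf_host_management ends here (about 9.3 s of wall time) and run_test moves on to nvmf_lvol over the same rdma transport. The nvmftestinit output that follows detects the two mlx5 ports (0000:da:00.0 and 0000:da:00.1), loads the RDMA kernel modules, and places 192.168.100.8 and 192.168.100.9 on mlx_0_0 and mlx_0_1. A hand-run equivalent of that bring-up, condensed from nvmf/common.sh and using the interface names and addresses reported further down, might look roughly like this (illustrative only, not the exact helper):

    # condensed RDMA bring-up for the phy test box (illustrative)
    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    ip addr show mlx_0_0 | grep -q 192.168.100.8 || ip addr add 192.168.100.8/24 dev mlx_0_0
    ip addr show mlx_0_1 | grep -q 192.168.100.9 || ip addr add 192.168.100.9/24 dev mlx_0_1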
00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.807 ************************************ 00:07:50.807 START TEST nvmf_lvol 00:07:50.807 ************************************ 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:50.807 * Looking for test storage... 00:07:50.807 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.807 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.808 10:29:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@298 -- # mlx=() 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.076 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:56.077 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:56.077 Found 
0000:da:00.1 (0x15b3 - 0x1015) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:56.077 Found net devices under 0000:da:00.0: mlx_0_0 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:56.077 Found net devices under 0000:da:00.1: mlx_0_1 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:07:56.077 10:30:03 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:56.077 10:30:03 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:56.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:56.077 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:56.077 altname enp218s0f0np0 00:07:56.077 altname ens818f0np0 00:07:56.077 inet 192.168.100.8/24 scope global mlx_0_0 00:07:56.077 valid_lft forever preferred_lft forever 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:56.077 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:56.077 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:56.077 altname enp218s0f1np1 00:07:56.077 altname ens818f1np1 00:07:56.077 inet 192.168.100.9/24 scope global mlx_0_1 00:07:56.077 valid_lft forever preferred_lft forever 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:56.077 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:56.078 192.168.100.9' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:56.078 192.168.100.9' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:56.078 192.168.100.9' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:56.078 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2081007 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2081007 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2081007 ']' 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.336 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:56.336 [2024-07-24 10:30:03.582000] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:07:56.337 [2024-07-24 10:30:03.582044] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.337 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.337 [2024-07-24 10:30:03.635937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.337 [2024-07-24 10:30:03.676679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.337 [2024-07-24 10:30:03.676719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.337 [2024-07-24 10:30:03.676726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.337 [2024-07-24 10:30:03.676731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.337 [2024-07-24 10:30:03.676736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
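At this point the harness has finished probing the fabric: both ConnectX ports (0x15b3:0x1015) were matched to mlx_0_0/mlx_0_1, the IB/RDMA modules were loaded, and 192.168.100.8/192.168.100.9 became NVMF_FIRST_TARGET_IP/NVMF_SECOND_TARGET_IP. A minimal stand-alone sketch of the address-discovery pattern the trace above walks through, assuming the interface names from this run (the real logic lives in get_ip_address() and get_available_rdma_ips() in nvmf/common.sh):

    #!/usr/bin/env bash
    # Sketch only: mirror how the trace derives the target IPs from the RDMA netdevs.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per IPv4 address; field 4 is e.g. 192.168.100.8/24,
        # and cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    rdma_ip_list=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9 in this run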
00:07:56.337 [2024-07-24 10:30:03.676791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.337 [2024-07-24 10:30:03.676888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.337 [2024-07-24 10:30:03.676889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.337 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.337 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:56.337 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.337 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:56.337 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:56.595 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.596 10:30:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:56.596 [2024-07-24 10:30:03.986012] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf1c990/0xf20e40) succeed. 00:07:56.596 [2024-07-24 10:30:03.995098] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf1dee0/0xf624d0) succeed. 00:07:56.854 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:56.854 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:56.854 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:57.113 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:57.113 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:57.371 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:57.629 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c6486b1a-14e3-4cb7-aeda-38ec25cf5b6a 00:07:57.629 10:30:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c6486b1a-14e3-4cb7-aeda-38ec25cf5b6a lvol 20 00:07:57.629 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ba9e65ce-d540-47b5-b204-6d9d3688560b 00:07:57.629 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.886 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ba9e65ce-d540-47b5-b204-6d9d3688560b 00:07:58.143 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:58.143 [2024-07-24 10:30:05.559311] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:58.143 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:58.401 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2081494 00:07:58.401 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:58.401 10:30:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:58.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.336 10:30:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ba9e65ce-d540-47b5-b204-6d9d3688560b MY_SNAPSHOT 00:07:59.595 10:30:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e60d735a-fc3d-4fb6-a937-3f97ca82a1ac 00:07:59.595 10:30:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ba9e65ce-d540-47b5-b204-6d9d3688560b 30 00:07:59.853 10:30:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e60d735a-fc3d-4fb6-a937-3f97ca82a1ac MY_CLONE 00:08:00.112 10:30:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cb7d1300-1d2f-4c64-a250-0ae5c190e17f 00:08:00.112 10:30:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cb7d1300-1d2f-4c64-a250-0ae5c190e17f 00:08:00.112 10:30:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2081494 00:08:10.088 Initializing NVMe Controllers 00:08:10.088 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:10.088 Controller IO queue size 128, less than required. 00:08:10.088 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:10.088 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:10.088 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:10.088 Initialization complete. Launching workers. 
00:08:10.088 ======================================================== 00:08:10.088 Latency(us) 00:08:10.088 Device Information : IOPS MiB/s Average min max 00:08:10.088 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16775.70 65.53 7632.61 2039.70 54478.98 00:08:10.088 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16808.70 65.66 7617.20 2494.86 58230.75 00:08:10.088 ======================================================== 00:08:10.088 Total : 33584.40 131.19 7624.90 2039.70 58230.75 00:08:10.088 00:08:10.088 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.088 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ba9e65ce-d540-47b5-b204-6d9d3688560b 00:08:10.088 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6486b1a-14e3-4cb7-aeda-38ec25cf5b6a 00:08:10.346 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:10.346 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:10.346 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:10.346 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:10.347 rmmod nvme_rdma 00:08:10.347 rmmod nvme_fabrics 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2081007 ']' 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2081007 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2081007 ']' 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2081007 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.347 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081007 00:08:10.605 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.605 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.605 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081007' 00:08:10.605 killing process with pid 2081007 00:08:10.605 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2081007 00:08:10.605 10:30:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2081007 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:10.865 00:08:10.865 real 0m19.993s 00:08:10.865 user 1m9.147s 00:08:10.865 sys 0m5.096s 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.865 ************************************ 00:08:10.865 END TEST nvmf_lvol 00:08:10.865 ************************************ 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.865 ************************************ 00:08:10.865 START TEST nvmf_lvs_grow 00:08:10.865 ************************************ 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:10.865 * Looking for test storage... 
00:08:10.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.865 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.866 10:30:18 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 
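Before the lvs_grow initialization trace below, the nvmf_lvol run that just finished is easier to follow as a condensed RPC sequence than as raw xtrace. A sketch of what target/nvmf_lvol.sh did above, with rpc.py paths shortened and the UUIDs and sizes taken from this run (not a verbatim copy of the script):

    rpc=scripts/rpc.py   # shorthand for the full /var/jenkins/.../spdk/scripts/rpc.py path seen above
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    base0=$($rpc bdev_malloc_create 64 512)               # Malloc0
    base1=$($rpc bdev_malloc_create 64 512)               # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # c6486b1a-... in this run
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # ba9e65ce-... in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # While spdk_nvme_perf writes to the namespace for 10 s, the lvol is snapshotted,
    # resized from 20 to 30, cloned, and the clone inflated:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    # Teardown runs in reverse: delete the subsystem, the lvol, then the lvol store.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"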
00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.866 10:30:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.136 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.137 10:30:23 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:16.137 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:16.137 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:16.137 Found net devices under 0000:da:00.0: mlx_0_0 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:16.137 Found net devices under 0000:da:00.1: mlx_0_1 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:16.137 10:30:23 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.137 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:16.396 
10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:16.396 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.396 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:16.396 altname enp218s0f0np0 00:08:16.396 altname ens818f0np0 00:08:16.396 inet 192.168.100.8/24 scope global mlx_0_0 00:08:16.396 valid_lft forever preferred_lft forever 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:16.396 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.396 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:16.396 altname enp218s0f1np1 00:08:16.396 altname ens818f1np1 00:08:16.396 inet 192.168.100.9/24 scope global mlx_0_1 00:08:16.396 valid_lft forever preferred_lft forever 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.396 10:30:23 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:16.396 192.168.100.9' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:16.396 192.168.100.9' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:16.396 192.168.100.9' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@463 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2087027 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2087027 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2087027 ']' 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.396 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.397 [2024-07-24 10:30:23.774418] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:08:16.397 [2024-07-24 10:30:23.774473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.397 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.397 [2024-07-24 10:30:23.830866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.655 [2024-07-24 10:30:23.874704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.655 [2024-07-24 10:30:23.874741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.655 [2024-07-24 10:30:23.874748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.655 [2024-07-24 10:30:23.874754] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.655 [2024-07-24 10:30:23.874760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
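The nvmfappstart/waitforlisten handshake from the lvol case repeats here for the lvs_grow target (nvmfpid 2087027, core mask 0x1). A rough sketch of that pattern, assuming SPDK's default /var/tmp/spdk.sock RPC socket and a simple polling loop rather than the exact retry logic of autotest_common.sh:

    # Sketch: start the target in the background and block until its RPC socket answers.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
    # Only once the target answers is it safe to create the RDMA transport:
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192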
00:08:16.655 [2024-07-24 10:30:23.874781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.655 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.655 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:16.655 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.655 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.655 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.655 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.655 10:30:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:16.914 [2024-07-24 10:30:24.175135] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x598240/0x59c6f0) succeed. 00:08:16.914 [2024-07-24 10:30:24.183646] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5996f0/0x5ddd80) succeed. 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.914 ************************************ 00:08:16.914 START TEST lvs_grow_clean 00:08:16.914 ************************************ 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.914 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.172 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:17.172 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:17.430 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=33c58697-2c26-4769-9005-30039e0f9e51 00:08:17.430 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:17.430 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:17.430 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:17.430 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:17.430 10:30:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33c58697-2c26-4769-9005-30039e0f9e51 lvol 150 00:08:17.689 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f403bbf1-e301-4972-9523-7e6b76354c5e 00:08:17.689 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.689 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:17.948 [2024-07-24 10:30:25.180845] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:17.948 [2024-07-24 10:30:25.180895] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:17.948 true 00:08:17.948 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:17.948 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:17.948 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:17.948 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.206 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f403bbf1-e301-4972-9523-7e6b76354c5e 00:08:18.465 10:30:25 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:18.465 [2024-07-24 10:30:25.859056] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:18.465 10:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2087523 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2087523 /var/tmp/bdevperf.sock 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2087523 ']' 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.723 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:18.723 [2024-07-24 10:30:26.048314] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:08:18.723 [2024-07-24 10:30:26.048358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087523 ] 00:08:18.723 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.723 [2024-07-24 10:30:26.101395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.723 [2024-07-24 10:30:26.140137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.989 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.989 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:18.989 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:19.313 Nvme0n1 00:08:19.313 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:19.313 [ 00:08:19.313 { 00:08:19.313 "name": "Nvme0n1", 00:08:19.313 "aliases": [ 00:08:19.313 "f403bbf1-e301-4972-9523-7e6b76354c5e" 00:08:19.313 ], 00:08:19.313 "product_name": "NVMe disk", 00:08:19.313 "block_size": 4096, 00:08:19.313 "num_blocks": 38912, 00:08:19.313 "uuid": "f403bbf1-e301-4972-9523-7e6b76354c5e", 00:08:19.313 "assigned_rate_limits": { 00:08:19.313 "rw_ios_per_sec": 0, 00:08:19.313 "rw_mbytes_per_sec": 0, 00:08:19.313 "r_mbytes_per_sec": 0, 00:08:19.313 "w_mbytes_per_sec": 0 00:08:19.313 }, 00:08:19.313 "claimed": false, 00:08:19.313 "zoned": false, 00:08:19.313 "supported_io_types": { 00:08:19.313 "read": true, 00:08:19.313 "write": true, 00:08:19.313 "unmap": true, 00:08:19.313 "flush": true, 00:08:19.313 "reset": true, 00:08:19.313 "nvme_admin": true, 00:08:19.313 "nvme_io": true, 00:08:19.313 "nvme_io_md": false, 00:08:19.313 "write_zeroes": true, 00:08:19.313 "zcopy": false, 00:08:19.313 "get_zone_info": false, 00:08:19.313 "zone_management": false, 00:08:19.313 "zone_append": false, 00:08:19.313 "compare": true, 00:08:19.313 "compare_and_write": true, 00:08:19.313 "abort": true, 00:08:19.313 "seek_hole": false, 00:08:19.313 "seek_data": false, 00:08:19.313 "copy": true, 00:08:19.313 "nvme_iov_md": false 00:08:19.313 }, 00:08:19.313 "memory_domains": [ 00:08:19.313 { 00:08:19.313 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:19.313 "dma_device_type": 0 00:08:19.313 } 00:08:19.313 ], 00:08:19.313 "driver_specific": { 00:08:19.313 "nvme": [ 00:08:19.313 { 00:08:19.313 "trid": { 00:08:19.313 "trtype": "RDMA", 00:08:19.313 "adrfam": "IPv4", 00:08:19.313 "traddr": "192.168.100.8", 00:08:19.313 "trsvcid": "4420", 00:08:19.313 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:19.313 }, 00:08:19.313 "ctrlr_data": { 00:08:19.313 "cntlid": 1, 00:08:19.313 "vendor_id": "0x8086", 00:08:19.313 "model_number": "SPDK bdev Controller", 00:08:19.313 "serial_number": "SPDK0", 00:08:19.313 "firmware_revision": "24.09", 00:08:19.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.313 "oacs": { 00:08:19.313 "security": 0, 00:08:19.313 "format": 0, 00:08:19.313 "firmware": 0, 00:08:19.313 "ns_manage": 0 00:08:19.313 }, 
00:08:19.313 "multi_ctrlr": true, 00:08:19.313 "ana_reporting": false 00:08:19.313 }, 00:08:19.313 "vs": { 00:08:19.313 "nvme_version": "1.3" 00:08:19.313 }, 00:08:19.313 "ns_data": { 00:08:19.313 "id": 1, 00:08:19.313 "can_share": true 00:08:19.313 } 00:08:19.313 } 00:08:19.313 ], 00:08:19.313 "mp_policy": "active_passive" 00:08:19.313 } 00:08:19.313 } 00:08:19.313 ] 00:08:19.313 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2087537 00:08:19.313 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:19.313 10:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.313 Running I/O for 10 seconds... 00:08:20.691 Latency(us) 00:08:20.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.691 Nvme0n1 : 1.00 34626.00 135.26 0.00 0.00 0.00 0.00 0.00 00:08:20.691 =================================================================================================================== 00:08:20.691 Total : 34626.00 135.26 0.00 0.00 0.00 0.00 0.00 00:08:20.691 00:08:21.259 10:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:21.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.518 Nvme0n1 : 2.00 34896.00 136.31 0.00 0.00 0.00 0.00 0.00 00:08:21.518 =================================================================================================================== 00:08:21.518 Total : 34896.00 136.31 0.00 0.00 0.00 0.00 0.00 00:08:21.518 00:08:21.518 true 00:08:21.518 10:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:21.518 10:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:21.776 10:30:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:21.776 10:30:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:21.776 10:30:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2087537 00:08:22.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.343 Nvme0n1 : 3.00 34986.67 136.67 0.00 0.00 0.00 0.00 0.00 00:08:22.343 =================================================================================================================== 00:08:22.343 Total : 34986.67 136.67 0.00 0.00 0.00 0.00 0.00 00:08:22.343 00:08:23.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.719 Nvme0n1 : 4.00 34984.50 136.66 0.00 0.00 0.00 0.00 0.00 00:08:23.719 =================================================================================================================== 00:08:23.719 Total : 34984.50 136.66 0.00 0.00 0.00 0.00 0.00 00:08:23.719 00:08:24.654 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:08:24.654 Nvme0n1 : 5.00 35046.60 136.90 0.00 0.00 0.00 0.00 0.00 00:08:24.654 =================================================================================================================== 00:08:24.654 Total : 35046.60 136.90 0.00 0.00 0.00 0.00 0.00 00:08:24.654 00:08:25.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.590 Nvme0n1 : 6.00 35119.83 137.19 0.00 0.00 0.00 0.00 0.00 00:08:25.590 =================================================================================================================== 00:08:25.590 Total : 35119.83 137.19 0.00 0.00 0.00 0.00 0.00 00:08:25.590 00:08:26.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.525 Nvme0n1 : 7.00 35150.14 137.31 0.00 0.00 0.00 0.00 0.00 00:08:26.525 =================================================================================================================== 00:08:26.525 Total : 35150.14 137.31 0.00 0.00 0.00 0.00 0.00 00:08:26.525 00:08:27.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.462 Nvme0n1 : 8.00 35187.62 137.45 0.00 0.00 0.00 0.00 0.00 00:08:27.462 =================================================================================================================== 00:08:27.462 Total : 35187.62 137.45 0.00 0.00 0.00 0.00 0.00 00:08:27.462 00:08:28.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.397 Nvme0n1 : 9.00 35214.22 137.56 0.00 0.00 0.00 0.00 0.00 00:08:28.397 =================================================================================================================== 00:08:28.397 Total : 35214.22 137.56 0.00 0.00 0.00 0.00 0.00 00:08:28.397 00:08:29.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.333 Nvme0n1 : 10.00 35235.00 137.64 0.00 0.00 0.00 0.00 0.00 00:08:29.333 =================================================================================================================== 00:08:29.333 Total : 35235.00 137.64 0.00 0.00 0.00 0.00 0.00 00:08:29.333 00:08:29.333 00:08:29.333 Latency(us) 00:08:29.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.333 Nvme0n1 : 10.00 35233.39 137.63 0.00 0.00 3629.86 2418.59 8051.57 00:08:29.333 =================================================================================================================== 00:08:29.333 Total : 35233.39 137.63 0.00 0.00 3629.86 2418.59 8051.57 00:08:29.333 0 00:08:29.333 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2087523 00:08:29.333 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2087523 ']' 00:08:29.333 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2087523 00:08:29.333 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:29.592 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.592 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2087523 00:08:29.592 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 
-- # process_name=reactor_1 00:08:29.592 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:29.592 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2087523' 00:08:29.592 killing process with pid 2087523 00:08:29.592 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2087523 00:08:29.592 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.592 00:08:29.592 Latency(us) 00:08:29.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.592 =================================================================================================================== 00:08:29.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.592 10:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2087523 00:08:29.592 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:29.850 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.109 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:30.109 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:30.109 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:30.109 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:30.109 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.368 [2024-07-24 10:30:37.713307] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:30.368 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:30.627 request: 00:08:30.627 { 00:08:30.627 "uuid": "33c58697-2c26-4769-9005-30039e0f9e51", 00:08:30.627 "method": "bdev_lvol_get_lvstores", 00:08:30.627 "req_id": 1 00:08:30.627 } 00:08:30.627 Got JSON-RPC error response 00:08:30.627 response: 00:08:30.627 { 00:08:30.627 "code": -19, 00:08:30.627 "message": "No such device" 00:08:30.627 } 00:08:30.627 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:30.627 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.627 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.627 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.627 10:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.627 aio_bdev 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f403bbf1-e301-4972-9523-7e6b76354c5e 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=f403bbf1-e301-4972-9523-7e6b76354c5e 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:30.886 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
f403bbf1-e301-4972-9523-7e6b76354c5e -t 2000 00:08:31.145 [ 00:08:31.145 { 00:08:31.145 "name": "f403bbf1-e301-4972-9523-7e6b76354c5e", 00:08:31.145 "aliases": [ 00:08:31.145 "lvs/lvol" 00:08:31.145 ], 00:08:31.145 "product_name": "Logical Volume", 00:08:31.145 "block_size": 4096, 00:08:31.145 "num_blocks": 38912, 00:08:31.145 "uuid": "f403bbf1-e301-4972-9523-7e6b76354c5e", 00:08:31.145 "assigned_rate_limits": { 00:08:31.145 "rw_ios_per_sec": 0, 00:08:31.145 "rw_mbytes_per_sec": 0, 00:08:31.145 "r_mbytes_per_sec": 0, 00:08:31.145 "w_mbytes_per_sec": 0 00:08:31.145 }, 00:08:31.145 "claimed": false, 00:08:31.145 "zoned": false, 00:08:31.145 "supported_io_types": { 00:08:31.145 "read": true, 00:08:31.145 "write": true, 00:08:31.145 "unmap": true, 00:08:31.145 "flush": false, 00:08:31.145 "reset": true, 00:08:31.145 "nvme_admin": false, 00:08:31.145 "nvme_io": false, 00:08:31.145 "nvme_io_md": false, 00:08:31.145 "write_zeroes": true, 00:08:31.145 "zcopy": false, 00:08:31.145 "get_zone_info": false, 00:08:31.145 "zone_management": false, 00:08:31.145 "zone_append": false, 00:08:31.145 "compare": false, 00:08:31.145 "compare_and_write": false, 00:08:31.145 "abort": false, 00:08:31.145 "seek_hole": true, 00:08:31.145 "seek_data": true, 00:08:31.145 "copy": false, 00:08:31.145 "nvme_iov_md": false 00:08:31.145 }, 00:08:31.145 "driver_specific": { 00:08:31.145 "lvol": { 00:08:31.145 "lvol_store_uuid": "33c58697-2c26-4769-9005-30039e0f9e51", 00:08:31.145 "base_bdev": "aio_bdev", 00:08:31.145 "thin_provision": false, 00:08:31.145 "num_allocated_clusters": 38, 00:08:31.145 "snapshot": false, 00:08:31.145 "clone": false, 00:08:31.145 "esnap_clone": false 00:08:31.145 } 00:08:31.145 } 00:08:31.145 } 00:08:31.145 ] 00:08:31.145 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:31.145 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:31.145 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.145 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.145 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:31.145 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:31.403 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:31.404 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f403bbf1-e301-4972-9523-7e6b76354c5e 00:08:31.662 10:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33c58697-2c26-4769-9005-30039e0f9e51 00:08:31.662 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:08:31.920 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.920 00:08:31.921 real 0m15.006s 00:08:31.921 user 0m14.893s 00:08:31.921 sys 0m0.995s 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:31.921 ************************************ 00:08:31.921 END TEST lvs_grow_clean 00:08:31.921 ************************************ 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.921 ************************************ 00:08:31.921 START TEST lvs_grow_dirty 00:08:31.921 ************************************ 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.921 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.179 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.179 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.437 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0742f132-92a5-4fe7-829e-5613b01979da 00:08:32.437 10:30:39 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:32.437 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:32.696 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:32.696 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:32.696 10:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0742f132-92a5-4fe7-829e-5613b01979da lvol 150 00:08:32.696 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=89ee028f-3a77-491c-b898-2405c9719d64 00:08:32.696 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.696 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.955 [2024-07-24 10:30:40.242821] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.955 [2024-07-24 10:30:40.242875] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.955 true 00:08:32.955 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:32.955 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.214 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:33.214 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.214 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89ee028f-3a77-491c-b898-2405c9719d64 00:08:33.496 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:33.496 [2024-07-24 10:30:40.917006] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:33.496 10:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:33.755 10:30:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2090038 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2090038 /var/tmp/bdevperf.sock 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2090038 ']' 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.755 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.755 [2024-07-24 10:30:41.128054] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:08:33.755 [2024-07-24 10:30:41.128102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090038 ] 00:08:33.755 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.755 [2024-07-24 10:30:41.184230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.015 [2024-07-24 10:30:41.225087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.015 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.015 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:34.015 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.274 Nvme0n1 00:08:34.274 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.534 [ 00:08:34.534 { 00:08:34.534 "name": "Nvme0n1", 00:08:34.534 "aliases": [ 00:08:34.534 "89ee028f-3a77-491c-b898-2405c9719d64" 00:08:34.534 ], 00:08:34.534 "product_name": "NVMe disk", 00:08:34.534 "block_size": 4096, 00:08:34.534 "num_blocks": 38912, 00:08:34.534 "uuid": "89ee028f-3a77-491c-b898-2405c9719d64", 00:08:34.534 "assigned_rate_limits": { 00:08:34.534 "rw_ios_per_sec": 0, 00:08:34.534 "rw_mbytes_per_sec": 0, 00:08:34.534 "r_mbytes_per_sec": 0, 00:08:34.534 "w_mbytes_per_sec": 0 00:08:34.534 }, 00:08:34.534 "claimed": false, 00:08:34.534 "zoned": false, 00:08:34.534 "supported_io_types": { 00:08:34.534 "read": true, 00:08:34.534 "write": true, 00:08:34.534 "unmap": true, 00:08:34.534 "flush": true, 00:08:34.534 "reset": true, 00:08:34.534 "nvme_admin": true, 00:08:34.534 "nvme_io": true, 00:08:34.534 "nvme_io_md": false, 00:08:34.534 "write_zeroes": true, 00:08:34.534 "zcopy": false, 00:08:34.534 "get_zone_info": false, 00:08:34.534 "zone_management": false, 00:08:34.534 "zone_append": false, 00:08:34.534 "compare": true, 00:08:34.534 "compare_and_write": true, 00:08:34.534 "abort": true, 00:08:34.534 "seek_hole": false, 00:08:34.534 "seek_data": false, 00:08:34.534 "copy": true, 00:08:34.534 "nvme_iov_md": false 00:08:34.534 }, 00:08:34.534 "memory_domains": [ 00:08:34.534 { 00:08:34.534 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:34.534 "dma_device_type": 0 00:08:34.534 } 00:08:34.534 ], 00:08:34.534 "driver_specific": { 00:08:34.534 "nvme": [ 00:08:34.534 { 00:08:34.534 "trid": { 00:08:34.534 "trtype": "RDMA", 00:08:34.534 "adrfam": "IPv4", 00:08:34.534 "traddr": "192.168.100.8", 00:08:34.534 "trsvcid": "4420", 00:08:34.534 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.534 }, 00:08:34.534 "ctrlr_data": { 00:08:34.534 "cntlid": 1, 00:08:34.534 "vendor_id": "0x8086", 00:08:34.534 "model_number": "SPDK bdev Controller", 00:08:34.534 "serial_number": "SPDK0", 00:08:34.534 "firmware_revision": "24.09", 00:08:34.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.534 "oacs": { 00:08:34.534 "security": 0, 00:08:34.534 "format": 0, 00:08:34.534 "firmware": 0, 00:08:34.534 "ns_manage": 0 00:08:34.534 }, 
00:08:34.534 "multi_ctrlr": true, 00:08:34.534 "ana_reporting": false 00:08:34.534 }, 00:08:34.534 "vs": { 00:08:34.534 "nvme_version": "1.3" 00:08:34.534 }, 00:08:34.534 "ns_data": { 00:08:34.534 "id": 1, 00:08:34.534 "can_share": true 00:08:34.534 } 00:08:34.534 } 00:08:34.534 ], 00:08:34.534 "mp_policy": "active_passive" 00:08:34.534 } 00:08:34.534 } 00:08:34.534 ] 00:08:34.534 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.534 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2090140 00:08:34.534 10:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.534 Running I/O for 10 seconds... 00:08:35.470 Latency(us) 00:08:35.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.470 Nvme0n1 : 1.00 34592.00 135.12 0.00 0.00 0.00 0.00 0.00 00:08:35.470 =================================================================================================================== 00:08:35.470 Total : 34592.00 135.12 0.00 0.00 0.00 0.00 0.00 00:08:35.470 00:08:36.407 10:30:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:36.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.407 Nvme0n1 : 2.00 34849.00 136.13 0.00 0.00 0.00 0.00 0.00 00:08:36.407 =================================================================================================================== 00:08:36.407 Total : 34849.00 136.13 0.00 0.00 0.00 0.00 0.00 00:08:36.407 00:08:36.666 true 00:08:36.666 10:30:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:36.666 10:30:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:36.666 10:30:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.666 10:30:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.666 10:30:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2090140 00:08:37.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.602 Nvme0n1 : 3.00 34912.00 136.38 0.00 0.00 0.00 0.00 0.00 00:08:37.602 =================================================================================================================== 00:08:37.602 Total : 34912.00 136.38 0.00 0.00 0.00 0.00 0.00 00:08:37.602 00:08:38.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.538 Nvme0n1 : 4.00 35017.00 136.79 0.00 0.00 0.00 0.00 0.00 00:08:38.538 =================================================================================================================== 00:08:38.538 Total : 35017.00 136.79 0.00 0.00 0.00 0.00 0.00 00:08:38.538 00:08:39.474 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:08:39.474 Nvme0n1 : 5.00 35084.20 137.05 0.00 0.00 0.00 0.00 0.00 00:08:39.474 =================================================================================================================== 00:08:39.474 Total : 35084.20 137.05 0.00 0.00 0.00 0.00 0.00 00:08:39.474 00:08:40.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.443 Nvme0n1 : 6.00 35130.83 137.23 0.00 0.00 0.00 0.00 0.00 00:08:40.443 =================================================================================================================== 00:08:40.443 Total : 35130.83 137.23 0.00 0.00 0.00 0.00 0.00 00:08:40.443 00:08:41.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.413 Nvme0n1 : 7.00 35154.14 137.32 0.00 0.00 0.00 0.00 0.00 00:08:41.413 =================================================================================================================== 00:08:41.413 Total : 35154.14 137.32 0.00 0.00 0.00 0.00 0.00 00:08:41.413 00:08:42.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.787 Nvme0n1 : 8.00 35187.88 137.45 0.00 0.00 0.00 0.00 0.00 00:08:42.787 =================================================================================================================== 00:08:42.787 Total : 35187.88 137.45 0.00 0.00 0.00 0.00 0.00 00:08:42.787 00:08:43.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.721 Nvme0n1 : 9.00 35189.11 137.46 0.00 0.00 0.00 0.00 0.00 00:08:43.721 =================================================================================================================== 00:08:43.721 Total : 35189.11 137.46 0.00 0.00 0.00 0.00 0.00 00:08:43.721 00:08:44.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.668 Nvme0n1 : 10.00 35219.00 137.57 0.00 0.00 0.00 0.00 0.00 00:08:44.668 =================================================================================================================== 00:08:44.668 Total : 35219.00 137.57 0.00 0.00 0.00 0.00 0.00 00:08:44.668 00:08:44.668 00:08:44.668 Latency(us) 00:08:44.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.668 Nvme0n1 : 10.00 35217.70 137.57 0.00 0.00 3631.44 2356.18 10111.27 00:08:44.668 =================================================================================================================== 00:08:44.668 Total : 35217.70 137.57 0.00 0.00 3631.44 2356.18 10111.27 00:08:44.668 0 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2090038 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2090038 ']' 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2090038 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2090038 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 
-- # process_name=reactor_1 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2090038' 00:08:44.668 killing process with pid 2090038 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2090038 00:08:44.668 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.668 00:08:44.668 Latency(us) 00:08:44.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.668 =================================================================================================================== 00:08:44.668 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.668 10:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2090038 00:08:44.668 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:44.927 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.185 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:45.185 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2087027 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2087027 00:08:45.444 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2087027 Killed "${NVMF_APP[@]}" "$@" 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2091994 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@482 -- # waitforlisten 2091994 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2091994 ']' 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.444 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.444 [2024-07-24 10:30:52.738940] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:08:45.444 [2024-07-24 10:30:52.738989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.444 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.444 [2024-07-24 10:30:52.794817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.444 [2024-07-24 10:30:52.835191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.444 [2024-07-24 10:30:52.835228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.444 [2024-07-24 10:30:52.835235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.444 [2024-07-24 10:30:52.835240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.444 [2024-07-24 10:30:52.835245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.444 [2024-07-24 10:30:52.835267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.703 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.703 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:45.703 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.703 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.703 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.703 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.703 10:30:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.703 [2024-07-24 10:30:53.105948] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.703 [2024-07-24 10:30:53.106038] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.703 [2024-07-24 10:30:53.106062] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 89ee028f-3a77-491c-b898-2405c9719d64 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=89ee028f-3a77-491c-b898-2405c9719d64 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.703 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.962 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89ee028f-3a77-491c-b898-2405c9719d64 -t 2000 00:08:46.220 [ 00:08:46.220 { 00:08:46.220 "name": "89ee028f-3a77-491c-b898-2405c9719d64", 00:08:46.220 "aliases": [ 00:08:46.220 "lvs/lvol" 00:08:46.220 ], 00:08:46.220 "product_name": "Logical Volume", 00:08:46.220 "block_size": 4096, 00:08:46.220 "num_blocks": 38912, 00:08:46.220 "uuid": "89ee028f-3a77-491c-b898-2405c9719d64", 00:08:46.220 "assigned_rate_limits": { 00:08:46.220 "rw_ios_per_sec": 0, 00:08:46.220 "rw_mbytes_per_sec": 0, 00:08:46.220 "r_mbytes_per_sec": 0, 00:08:46.220 "w_mbytes_per_sec": 0 00:08:46.220 }, 00:08:46.220 "claimed": false, 00:08:46.220 "zoned": false, 
00:08:46.220 "supported_io_types": { 00:08:46.220 "read": true, 00:08:46.220 "write": true, 00:08:46.220 "unmap": true, 00:08:46.220 "flush": false, 00:08:46.220 "reset": true, 00:08:46.220 "nvme_admin": false, 00:08:46.220 "nvme_io": false, 00:08:46.220 "nvme_io_md": false, 00:08:46.220 "write_zeroes": true, 00:08:46.220 "zcopy": false, 00:08:46.220 "get_zone_info": false, 00:08:46.220 "zone_management": false, 00:08:46.220 "zone_append": false, 00:08:46.220 "compare": false, 00:08:46.220 "compare_and_write": false, 00:08:46.220 "abort": false, 00:08:46.220 "seek_hole": true, 00:08:46.220 "seek_data": true, 00:08:46.220 "copy": false, 00:08:46.220 "nvme_iov_md": false 00:08:46.220 }, 00:08:46.220 "driver_specific": { 00:08:46.220 "lvol": { 00:08:46.220 "lvol_store_uuid": "0742f132-92a5-4fe7-829e-5613b01979da", 00:08:46.220 "base_bdev": "aio_bdev", 00:08:46.220 "thin_provision": false, 00:08:46.220 "num_allocated_clusters": 38, 00:08:46.220 "snapshot": false, 00:08:46.220 "clone": false, 00:08:46.220 "esnap_clone": false 00:08:46.220 } 00:08:46.220 } 00:08:46.220 } 00:08:46.220 ] 00:08:46.220 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:46.220 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:46.220 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:46.220 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:46.220 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:46.220 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:46.478 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:46.478 10:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.738 [2024-07-24 10:30:53.958544] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:46.738 request: 00:08:46.738 { 00:08:46.738 "uuid": "0742f132-92a5-4fe7-829e-5613b01979da", 00:08:46.738 "method": "bdev_lvol_get_lvstores", 00:08:46.738 "req_id": 1 00:08:46.738 } 00:08:46.738 Got JSON-RPC error response 00:08:46.738 response: 00:08:46.738 { 00:08:46.738 "code": -19, 00:08:46.738 "message": "No such device" 00:08:46.738 } 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.738 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.996 aio_bdev 00:08:46.996 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 89ee028f-3a77-491c-b898-2405c9719d64 00:08:46.996 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=89ee028f-3a77-491c-b898-2405c9719d64 00:08:46.996 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.996 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:46.996 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.996 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.996 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.255 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89ee028f-3a77-491c-b898-2405c9719d64 -t 2000 00:08:47.255 [ 00:08:47.255 { 00:08:47.255 "name": "89ee028f-3a77-491c-b898-2405c9719d64", 00:08:47.255 "aliases": [ 00:08:47.255 "lvs/lvol" 00:08:47.255 ], 00:08:47.255 "product_name": "Logical Volume", 00:08:47.255 "block_size": 4096, 00:08:47.255 "num_blocks": 38912, 00:08:47.255 "uuid": "89ee028f-3a77-491c-b898-2405c9719d64", 00:08:47.255 "assigned_rate_limits": { 00:08:47.255 "rw_ios_per_sec": 0, 00:08:47.255 "rw_mbytes_per_sec": 0, 00:08:47.255 "r_mbytes_per_sec": 0, 00:08:47.255 "w_mbytes_per_sec": 0 00:08:47.255 }, 00:08:47.255 "claimed": false, 00:08:47.255 "zoned": false, 00:08:47.255 "supported_io_types": { 00:08:47.255 "read": true, 00:08:47.255 "write": true, 00:08:47.255 "unmap": true, 00:08:47.255 "flush": false, 00:08:47.255 "reset": true, 00:08:47.255 "nvme_admin": false, 00:08:47.255 "nvme_io": false, 00:08:47.255 "nvme_io_md": false, 00:08:47.255 "write_zeroes": true, 00:08:47.255 "zcopy": false, 00:08:47.255 "get_zone_info": false, 00:08:47.255 "zone_management": false, 00:08:47.255 "zone_append": false, 00:08:47.255 "compare": false, 00:08:47.255 "compare_and_write": false, 00:08:47.255 "abort": false, 00:08:47.255 "seek_hole": true, 00:08:47.255 "seek_data": true, 00:08:47.255 "copy": false, 00:08:47.255 "nvme_iov_md": false 00:08:47.255 }, 00:08:47.255 "driver_specific": { 00:08:47.255 "lvol": { 00:08:47.255 "lvol_store_uuid": "0742f132-92a5-4fe7-829e-5613b01979da", 00:08:47.255 "base_bdev": "aio_bdev", 00:08:47.255 "thin_provision": false, 00:08:47.255 "num_allocated_clusters": 38, 00:08:47.255 "snapshot": false, 00:08:47.255 "clone": false, 00:08:47.255 "esnap_clone": false 00:08:47.255 } 00:08:47.255 } 00:08:47.255 } 00:08:47.255 ] 00:08:47.255 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:47.255 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:47.255 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.513 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.513 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:47.513 10:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.770 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.770 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89ee028f-3a77-491c-b898-2405c9719d64 00:08:47.770 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0742f132-92a5-4fe7-829e-5613b01979da 00:08:48.029 10:30:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.288 00:08:48.288 real 0m16.218s 00:08:48.288 user 0m42.961s 00:08:48.288 sys 0m2.805s 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.288 ************************************ 00:08:48.288 END TEST lvs_grow_dirty 00:08:48.288 ************************************ 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:48.288 nvmf_trace.0 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:48.288 rmmod nvme_rdma 00:08:48.288 rmmod nvme_fabrics 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2091994 ']' 00:08:48.288 10:30:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2091994 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2091994 ']' 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2091994 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.288 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2091994 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2091994' 00:08:48.547 killing process with pid 2091994 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2091994 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2091994 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:48.547 00:08:48.547 real 0m37.758s 00:08:48.547 user 1m2.923s 00:08:48.547 sys 0m8.297s 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.547 ************************************ 00:08:48.547 END TEST nvmf_lvs_grow 00:08:48.547 ************************************ 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.547 ************************************ 00:08:48.547 START TEST nvmf_bdev_io_wait 00:08:48.547 ************************************ 00:08:48.547 10:30:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:48.805 * Looking for test storage... 
00:08:48.805 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.805 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.806 10:30:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 
00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:54.083 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:54.083 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:54.083 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:54.084 Found net devices under 0000:da:00.0: mlx_0_0 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:54.084 Found net devices under 0000:da:00.1: mlx_0_1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:54.084 10:31:01 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.084 10:31:01 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:54.084 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.084 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:54.084 altname enp218s0f0np0 00:08:54.084 altname ens818f0np0 00:08:54.084 inet 192.168.100.8/24 scope global mlx_0_0 00:08:54.084 valid_lft forever preferred_lft forever 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:54.084 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.084 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:54.084 altname enp218s0f1np1 00:08:54.084 altname ens818f1np1 00:08:54.084 inet 192.168.100.9/24 scope global mlx_0_1 00:08:54.084 valid_lft forever preferred_lft forever 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:08:54.084 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.085 10:31:01 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:54.085 192.168.100.9' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:54.085 192.168.100.9' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:54.085 192.168.100.9' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2095575 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2095575 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2095575 ']' 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.085 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.344 [2024-07-24 10:31:01.577189] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:08:54.344 [2024-07-24 10:31:01.577239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.344 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.344 [2024-07-24 10:31:01.633520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.344 [2024-07-24 10:31:01.680980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.344 [2024-07-24 10:31:01.681020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.344 [2024-07-24 10:31:01.681028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.344 [2024-07-24 10:31:01.681035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.344 [2024-07-24 10:31:01.681044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.344 [2024-07-24 10:31:01.681087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.344 [2024-07-24 10:31:01.681205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.344 [2024-07-24 10:31:01.681293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.345 [2024-07-24 10:31:01.681294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.345 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.605 [2024-07-24 10:31:01.849231] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf4a780/0xf4ec70) succeed. 00:08:54.605 [2024-07-24 10:31:01.858084] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf4bd70/0xf90300) succeed. 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.605 10:31:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.605 Malloc0 00:08:54.605 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.605 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.605 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.605 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.605 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.606 [2024-07-24 10:31:02.034036] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2095818 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2095820 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:54.606 { 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme$subsystem", 00:08:54.606 "trtype": "$TEST_TRANSPORT", 00:08:54.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "$NVMF_PORT", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.606 "hdgst": ${hdgst:-false}, 00:08:54.606 "ddgst": ${ddgst:-false} 00:08:54.606 }, 00:08:54.606 "method": "bdev_nvme_attach_controller" 00:08:54.606 } 00:08:54.606 EOF 00:08:54.606 )") 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2095822 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2095825 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:54.606 { 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme$subsystem", 00:08:54.606 "trtype": "$TEST_TRANSPORT", 00:08:54.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "$NVMF_PORT", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.606 "hdgst": ${hdgst:-false}, 00:08:54.606 "ddgst": ${ddgst:-false} 00:08:54.606 }, 00:08:54.606 "method": "bdev_nvme_attach_controller" 00:08:54.606 } 00:08:54.606 EOF 00:08:54.606 )") 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 
4096 -w unmap -t 1 -s 256 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:54.606 { 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme$subsystem", 00:08:54.606 "trtype": "$TEST_TRANSPORT", 00:08:54.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "$NVMF_PORT", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.606 "hdgst": ${hdgst:-false}, 00:08:54.606 "ddgst": ${ddgst:-false} 00:08:54.606 }, 00:08:54.606 "method": "bdev_nvme_attach_controller" 00:08:54.606 } 00:08:54.606 EOF 00:08:54.606 )") 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:54.606 { 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme$subsystem", 00:08:54.606 "trtype": "$TEST_TRANSPORT", 00:08:54.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "$NVMF_PORT", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.606 "hdgst": ${hdgst:-false}, 00:08:54.606 "ddgst": ${ddgst:-false} 00:08:54.606 }, 00:08:54.606 "method": "bdev_nvme_attach_controller" 00:08:54.606 } 00:08:54.606 EOF 00:08:54.606 )") 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2095818 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme1", 00:08:54.606 "trtype": "rdma", 00:08:54.606 "traddr": "192.168.100.8", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "4420", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.606 "hdgst": false, 00:08:54.606 "ddgst": false 00:08:54.606 }, 00:08:54.606 "method": "bdev_nvme_attach_controller" 00:08:54.606 }' 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme1", 00:08:54.606 "trtype": "rdma", 00:08:54.606 "traddr": "192.168.100.8", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "4420", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.606 "hdgst": false, 00:08:54.606 "ddgst": false 00:08:54.606 }, 00:08:54.606 "method": "bdev_nvme_attach_controller" 00:08:54.606 }' 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme1", 00:08:54.606 "trtype": "rdma", 00:08:54.606 "traddr": "192.168.100.8", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "4420", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.606 "hdgst": false, 00:08:54.606 "ddgst": false 00:08:54.606 }, 00:08:54.606 "method": "bdev_nvme_attach_controller" 00:08:54.606 }' 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:54.606 10:31:02 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:54.606 "params": { 00:08:54.606 "name": "Nvme1", 00:08:54.606 "trtype": "rdma", 00:08:54.606 "traddr": "192.168.100.8", 00:08:54.606 "adrfam": "ipv4", 00:08:54.606 "trsvcid": "4420", 00:08:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.606 "hdgst": false, 00:08:54.606 "ddgst": false 00:08:54.607 }, 00:08:54.607 "method": "bdev_nvme_attach_controller" 00:08:54.607 }' 00:08:54.865 [2024-07-24 10:31:02.082420] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:08:54.865 [2024-07-24 10:31:02.082474] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:54.865 [2024-07-24 10:31:02.083180] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:08:54.865 [2024-07-24 10:31:02.083224] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:54.865 [2024-07-24 10:31:02.084097] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:08:54.865 [2024-07-24 10:31:02.084140] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:54.865 [2024-07-24 10:31:02.085878] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:08:54.865 [2024-07-24 10:31:02.085917] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:54.865 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.865 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.865 [2024-07-24 10:31:02.268541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.865 [2024-07-24 10:31:02.295235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:55.124 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.124 [2024-07-24 10:31:02.359583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.124 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.124 [2024-07-24 10:31:02.393421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:55.124 [2024-07-24 10:31:02.404467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.124 [2024-07-24 10:31:02.430317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:55.124 [2024-07-24 10:31:02.464489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.124 [2024-07-24 10:31:02.491562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:55.383 Running I/O for 1 seconds... 00:08:55.383 Running I/O for 1 seconds... 00:08:55.383 Running I/O for 1 seconds... 00:08:55.383 Running I/O for 1 seconds... 00:08:56.316 00:08:56.316 Latency(us) 00:08:56.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.316 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:56.316 Nvme1n1 : 1.01 17041.33 66.57 0.00 0.00 7487.97 4088.20 14667.58 00:08:56.316 =================================================================================================================== 00:08:56.316 Total : 17041.33 66.57 0.00 0.00 7487.97 4088.20 14667.58 00:08:56.316 00:08:56.316 Latency(us) 00:08:56.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.316 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:56.316 Nvme1n1 : 1.00 17527.04 68.46 0.00 0.00 7283.89 4400.27 17226.61 00:08:56.316 =================================================================================================================== 00:08:56.316 Total : 17527.04 68.46 0.00 0.00 7283.89 4400.27 17226.61 00:08:56.316 00:08:56.316 Latency(us) 00:08:56.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.316 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:56.316 Nvme1n1 : 1.00 254196.64 992.96 0.00 0.00 501.88 202.85 1966.08 00:08:56.316 =================================================================================================================== 00:08:56.316 Total : 254196.64 992.96 0.00 0.00 501.88 202.85 1966.08 00:08:56.316 00:08:56.316 Latency(us) 00:08:56.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.316 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:56.316 Nvme1n1 : 1.00 14070.10 54.96 0.00 0.00 9074.55 4025.78 20472.20 00:08:56.316 =================================================================================================================== 00:08:56.316 Total : 14070.10 54.96 0.00 0.00 9074.55 4025.78 20472.20 00:08:56.574 10:31:03 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2095820 00:08:56.574 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2095822 00:08:56.574 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2095825 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:56.834 rmmod nvme_rdma 00:08:56.834 rmmod nvme_fabrics 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2095575 ']' 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2095575 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2095575 ']' 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2095575 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2095575 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2095575' 00:08:56.834 killing process with pid 2095575 00:08:56.834 10:31:04 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2095575 00:08:56.834 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2095575 00:08:57.092 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.092 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:57.092 00:08:57.092 real 0m8.414s 00:08:57.092 user 0m17.547s 00:08:57.092 sys 0m5.333s 00:08:57.093 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.093 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.093 ************************************ 00:08:57.093 END TEST nvmf_bdev_io_wait 00:08:57.093 ************************************ 00:08:57.093 10:31:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:57.093 10:31:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:57.093 10:31:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.093 10:31:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.093 ************************************ 00:08:57.093 START TEST nvmf_queue_depth 00:08:57.093 ************************************ 00:08:57.093 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:57.351 * Looking for test storage... 
00:08:57.351 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.351 
10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.351 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:57.352 10:31:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 
00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.620 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:02.621 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:02.621 
10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:02.621 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:02.621 Found net devices under 0000:da:00.0: mlx_0_0 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:02.621 Found net devices under 0000:da:00.1: mlx_0_1 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
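In the discovery step just traced, each supported Mellanox function (vendor 0x15b3, device 0x1015) is mapped to its kernel netdev by listing the net/ children of its PCI sysfs node; on this host that yields mlx_0_0 and mlx_0_1. A rough standalone equivalent, using the two addresses reported above:

for pci in 0000:da:00.0 0000:da:00.1; do
  echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
done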
00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:02.621 10:31:09 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:02.621 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:02.621 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:02.621 altname enp218s0f0np0 00:09:02.621 altname ens818f0np0 00:09:02.621 inet 192.168.100.8/24 scope global mlx_0_0 00:09:02.621 valid_lft forever preferred_lft forever 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:02.621 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:02.621 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:02.621 altname enp218s0f1np1 00:09:02.621 altname ens818f1np1 00:09:02.621 inet 192.168.100.9/24 scope global mlx_0_1 00:09:02.621 valid_lft forever preferred_lft forever 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 
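The per-interface address gathering above (get_ip_address) boils down to one pipeline per netdev, which on this host resolves mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9:

ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9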
00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:02.621 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:09:02.622 192.168.100.9' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:02.622 192.168.100.9' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:02.622 192.168.100.9' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2099140 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2099140 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2099140 ']' 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 [2024-07-24 10:31:09.721048] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:09:02.622 [2024-07-24 10:31:09.721092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.622 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.622 [2024-07-24 10:31:09.776394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.622 [2024-07-24 10:31:09.814987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.622 [2024-07-24 10:31:09.815026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.622 [2024-07-24 10:31:09.815033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.622 [2024-07-24 10:31:09.815038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.622 [2024-07-24 10:31:09.815043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.622 [2024-07-24 10:31:09.815078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.622 10:31:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 [2024-07-24 10:31:09.957366] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15cb5a0/0x15cfa50) succeed. 00:09:02.622 [2024-07-24 10:31:09.965889] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15cca50/0x16110e0) succeed. 
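With the RDMA transport created and both mlx5 ports registered (the two create_ib_device notices above), the steps traced next export a namespace: a 64 MB Malloc bdev with 512-byte blocks, a subsystem, a namespace, and an RDMA listener on 192.168.100.8:4420. Assuming rpc_cmd forwards to SPDK's scripts/rpc.py on the target's default RPC socket (which is how the autotest helpers are normally wired), the whole setup condenses to:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420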
00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 Malloc0 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.622 [2024-07-24 10:31:10.054846] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2099278 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2099278 /var/tmp/bdevperf.sock 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2099278 ']' 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.622 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.881 [2024-07-24 10:31:10.102514] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:09:02.881 [2024-07-24 10:31:10.102556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099278 ] 00:09:02.881 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.881 [2024-07-24 10:31:10.156328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.881 [2024-07-24 10:31:10.196369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.881 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.881 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:02.881 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:02.881 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.881 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.139 NVMe0n1 00:09:03.139 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.139 10:31:10 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.139 Running I/O for 10 seconds... 
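On the initiator side, the trace above starts bdevperf idle with -z on its own RPC socket, attaches the remote namespace over that socket, and then kicks off the 10-second verify run at queue depth 1024 via bdevperf.py. Rendered with the same rpc.py assumption as above (paths shortened for readability), the three commands are roughly:

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests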
00:09:13.136 00:09:13.136 Latency(us) 00:09:13.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.136 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:13.136 Verification LBA range: start 0x0 length 0x4000 00:09:13.136 NVMe0n1 : 10.03 17556.14 68.58 0.00 0.00 58181.59 22843.98 38447.79 00:09:13.136 =================================================================================================================== 00:09:13.136 Total : 17556.14 68.58 0.00 0.00 58181.59 22843.98 38447.79 00:09:13.136 0 00:09:13.136 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2099278 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2099278 ']' 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2099278 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099278 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099278' 00:09:13.137 killing process with pid 2099278 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2099278 00:09:13.137 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.137 00:09:13.137 Latency(us) 00:09:13.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.137 =================================================================================================================== 00:09:13.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.137 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2099278 00:09:13.395 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:13.395 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:13.395 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:13.396 rmmod nvme_rdma 00:09:13.396 rmmod nvme_fabrics 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2099140 ']' 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2099140 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2099140 ']' 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2099140 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.396 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099140 00:09:13.654 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.654 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.654 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099140' 00:09:13.654 killing process with pid 2099140 00:09:13.654 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2099140 00:09:13.654 10:31:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2099140 00:09:13.654 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.654 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:13.654 00:09:13.654 real 0m16.619s 00:09:13.654 user 0m23.454s 00:09:13.654 sys 0m4.441s 00:09:13.654 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.654 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.654 ************************************ 00:09:13.654 END TEST nvmf_queue_depth 00:09:13.654 ************************************ 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 ************************************ 00:09:13.912 START TEST nvmf_target_multipath 00:09:13.912 ************************************ 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:13.912 * Looking for test storage... 
00:09:13.912 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.912 10:31:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@296 -- # e810=() 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:20.472 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:20.472 
10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:20.472 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.472 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:20.472 Found net devices under 0000:da:00.0: mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: 
mlx_0_1' 00:09:20.473 Found net devices under 0000:da:00.1: mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 
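The interface-matching loop traced above pairs each net device found under the mlx5 PCI functions with the RDMA-capable interfaces reported by the rxe_cfg helper, echoing only the matches (mlx_0_0 and mlx_0_1 in this run). A condensed bash sketch of that selection logic, with the candidate list passed in explicitly for readability rather than taken from the global array the script uses:

# Condensed sketch of the RDMA interface selection shown in the trace (illustrative).
get_rdma_if_list_sketch() {
    local net_dev rxe_net_dev
    local -a net_devs=("$@")                       # candidate interfaces found under the PCI functions
    local -a rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # rxe_cfg is the helper the trace runs (rxe_cfg_small.sh)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"                    # e.g. mlx_0_0, mlx_0_1 in this run
                continue 2
            fi
        done
    done
}
get_rdma_if_list_sketch mlx_0_0 mlx_0_1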
00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:20.473 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:20.473 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:20.473 altname enp218s0f0np0 00:09:20.473 altname ens818f0np0 00:09:20.473 inet 192.168.100.8/24 scope global mlx_0_0 00:09:20.473 valid_lft forever preferred_lft forever 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:20.473 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:20.473 
link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:20.473 altname enp218s0f1np1 00:09:20.473 altname ens818f1np1 00:09:20.473 inet 192.168.100.9/24 scope global mlx_0_1 00:09:20.473 valid_lft forever preferred_lft forever 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:20.473 192.168.100.9' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:20.473 192.168.100.9' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:20.473 192.168.100.9' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:20.473 run this test only with TCP transport for now 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # 
'[' rdma == tcp ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:20.473 rmmod nvme_rdma 00:09:20.473 rmmod nvme_fabrics 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:20.473 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:20.474 00:09:20.474 real 0m5.766s 00:09:20.474 user 0m1.686s 00:09:20.474 sys 0m4.214s 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:20.474 ************************************ 00:09:20.474 END TEST nvmf_target_multipath 00:09:20.474 ************************************ 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.474 ************************************ 00:09:20.474 START TEST nvmf_zcopy 00:09:20.474 ************************************ 00:09:20.474 10:31:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:20.474 * Looking for test storage... 00:09:20.474 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.474 10:31:27 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.474 
10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.474 10:31:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:25.740 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.740 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:25.741 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- 
# [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:25.741 Found net devices under 0000:da:00.0: mlx_0_0 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:25.741 Found net devices under 0000:da:00.1: mlx_0_1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:25.741 10:31:32 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 
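The address lookup right above reads each interface's IPv4 address out of "ip -o -4 addr show". A small bash sketch of that pipeline, mirroring the commands in the trace with the surrounding checks simplified:

# Sketch of the per-interface address lookup from the trace (illustrative).
get_ip_address_sketch() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is ADDR/PREFIX,
    # so strip the prefix length to keep the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

ip=$(get_ip_address_sketch mlx_0_0)          # 192.168.100.8 in this run
[[ -z $ip ]] && echo "no IPv4 address assigned to mlx_0_0"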
00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:25.741 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.741 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:25.741 altname enp218s0f0np0 00:09:25.741 altname ens818f0np0 00:09:25.741 inet 192.168.100.8/24 scope global mlx_0_0 00:09:25.741 valid_lft forever preferred_lft forever 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:25.741 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.741 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:25.741 altname enp218s0f1np1 00:09:25.741 altname ens818f1np1 00:09:25.741 inet 192.168.100.9/24 scope global mlx_0_1 00:09:25.741 valid_lft forever preferred_lft forever 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:09:25.741 10:31:32 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:25.741 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:25.742 192.168.100.9' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:25.742 192.168.100.9' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:25.742 192.168.100.9' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:25.742 10:31:32 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2107336 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2107336 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2107336 ']' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.742 [2024-07-24 10:31:32.714827] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:09:25.742 [2024-07-24 10:31:32.714869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.742 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.742 [2024-07-24 10:31:32.770284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.742 [2024-07-24 10:31:32.810803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.742 [2024-07-24 10:31:32.810840] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.742 [2024-07-24 10:31:32.810847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.742 [2024-07-24 10:31:32.810852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.742 [2024-07-24 10:31:32.810857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
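nvmfappstart launches the target binary with the flags traced above (-i 0 -e 0xFFFF -m 0x2) and then waits until the RPC socket at /var/tmp/spdk.sock is listening before the test proceeds. A rough sketch of that start-and-wait pattern, with a simple poll loop standing in for the harness's waitforlisten helper:

    # Sketch only: the poll loop approximates waitforlisten, it is not the
    # harness's actual implementation.
    NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break    # RPC listener is up
        sleep 0.1
    done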
00:09:25.742 [2024-07-24 10:31:32.810873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:25.742 Unsupported transport: rdma 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:25.742 nvmf_trace.0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.742 10:31:32 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:25.742 rmmod nvme_rdma 00:09:25.742 rmmod nvme_fabrics 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 
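Because target/zcopy.sh supports only the TCP transport, this run stops early with 'Unsupported transport: rdma' and exits 0; the EXIT trap installed by nvmf/common.sh then packs the nvmf trace buffer out of /dev/shm for later analysis. Condensed, and with the trap body reduced to the archiving step visible in the trace (the real trap also runs nvmftestfini), the control flow is roughly:

    # Hedged condensation of the early-exit path seen above.
    trap 'tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0' EXIT
    if [ rdma != tcp ]; then
        echo 'Unsupported transport: rdma'
        exit 0
    fi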
00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2107336 ']' 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2107336 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2107336 ']' 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2107336 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2107336 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2107336' 00:09:25.742 killing process with pid 2107336 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2107336 00:09:25.742 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2107336 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:26.006 00:09:26.006 real 0m6.233s 00:09:26.006 user 0m2.218s 00:09:26.006 sys 0m4.493s 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.006 ************************************ 00:09:26.006 END TEST nvmf_zcopy 00:09:26.006 ************************************ 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.006 ************************************ 00:09:26.006 START TEST nvmf_nmic 00:09:26.006 ************************************ 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:26.006 * Looking for test storage... 
00:09:26.006 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.006 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:26.007 10:31:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
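gather_supported_nvmf_pci_devs builds its candidate list from fixed vendor and device IDs, Intel e810/x722 parts plus the Mellanox 0x15b3 family, and the lines that follow match the two mlx5 ports at 0000:da:00.0 and 0000:da:00.1 (device 0x1015). For a by-hand cross-check of such a match, assuming lspci is installed, one could run:

    # Not part of the test scripts: a manual check of the same vendor:device
    # pair and of the sysfs path the script reads net device names from.
    lspci -D -d 15b3:1015
    ls /sys/bus/pci/devices/0000:da:00.0/net/    # expected to show mlx_0_0 here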
00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:31.871 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:31.871 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.871 
10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:31.871 Found net devices under 0000:da:00.0: mlx_0_0 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:31.871 Found net devices under 0000:da:00.1: mlx_0_1 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:31.871 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:31.872 10:31:38 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:31.872 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:31.872 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:31.872 altname enp218s0f0np0 00:09:31.872 altname ens818f0np0 00:09:31.872 inet 192.168.100.8/24 scope global mlx_0_0 00:09:31.872 valid_lft forever preferred_lft forever 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 
00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:31.872 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:31.872 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:31.872 altname enp218s0f1np1 00:09:31.872 altname ens818f1np1 00:09:31.872 inet 192.168.100.9/24 scope global mlx_0_1 00:09:31.872 valid_lft forever preferred_lft forever 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:31.872 192.168.100.9' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:31.872 192.168.100.9' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:31.872 192.168.100.9' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2110416 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2110416 
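With both interface addresses collected, the harness joins them into RDMA_IP_LIST, peels off the first and second target IPs with head and tail, and appends the shared-buffer option to NVMF_TRANSPORT_OPTS. Isolated, with the values captured in this run, that derivation is:

    # Isolated form of the list handling traced above; values as captured here.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'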
00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2110416 ']' 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.872 10:31:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.872 [2024-07-24 10:31:38.837371] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:09:31.872 [2024-07-24 10:31:38.837421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.872 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.873 [2024-07-24 10:31:38.893645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.873 [2024-07-24 10:31:38.938918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.873 [2024-07-24 10:31:38.938955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.873 [2024-07-24 10:31:38.938973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.873 [2024-07-24 10:31:38.938979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.873 [2024-07-24 10:31:38.938984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:31.873 [2024-07-24 10:31:38.939029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.873 [2024-07-24 10:31:38.939143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.873 [2024-07-24 10:31:38.939230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.873 [2024-07-24 10:31:38.939231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 [2024-07-24 10:31:39.110526] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xebb6a0/0xebfb70) succeed. 00:09:31.873 [2024-07-24 10:31:39.119541] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xebcc90/0xf01200) succeed. 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 Malloc0 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:31.873 10:31:39 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 [2024-07-24 10:31:39.283891] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:31.873 test case1: single bdev can't be used in multiple subsystems 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 [2024-07-24 10:31:39.307710] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:31.873 [2024-07-24 10:31:39.307733] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:31.873 [2024-07-24 10:31:39.307740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.873 request: 00:09:31.873 { 00:09:31.873 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:31.873 "namespace": { 00:09:31.873 "bdev_name": "Malloc0", 00:09:31.873 "no_auto_visible": false 00:09:31.873 }, 00:09:31.873 "method": "nvmf_subsystem_add_ns", 00:09:31.873 "req_id": 1 00:09:31.873 } 00:09:31.873 Got JSON-RPC error response 00:09:31.873 response: 00:09:31.873 { 00:09:31.873 "code": -32602, 00:09:31.873 "message": "Invalid parameters" 00:09:31.873 } 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:31.873 Adding namespace failed - expected result. 
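The nmic test assembles its fabric through a short RPC sequence: create the RDMA transport, back it with a 64 MB Malloc bdev, expose it as cnode1 listening on 192.168.100.8:4420, and then, for test case1, deliberately try to attach the same bdev to a second subsystem and expect the 'already claimed' error shown above. Assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the sequence corresponds roughly to:

    # Sketch of the RPC calls implied by the trace; the rpc.py wrapping is assumed.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # test case1: Malloc0 is already claimed by cnode1, so the last call below
    # is expected to fail, and the test treats that failure as a pass.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo ' Adding namespace failed - expected result.'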
00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:31.873 test case2: host connect to nvmf target in multiple paths 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.873 [2024-07-24 10:31:39.319783] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:09:31.873 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.132 10:31:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:33.089 10:31:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:09:34.022 10:31:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.022 10:31:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.022 10:31:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.022 10:31:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.022 10:31:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.923 10:31:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.923 10:31:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.923 10:31:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.923 10:31:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.923 10:31:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.923 10:31:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:35.923 10:31:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:35.923 [global] 00:09:35.923 thread=1 00:09:35.923 invalidate=1 00:09:35.923 rw=write 00:09:35.923 time_based=1 00:09:35.923 runtime=1 00:09:35.923 ioengine=libaio 00:09:35.923 direct=1 00:09:35.923 bs=4096 00:09:35.923 iodepth=1 00:09:35.923 norandommap=0 00:09:35.923 numjobs=1 00:09:35.923 00:09:35.923 verify_dump=1 00:09:35.923 verify_backlog=512 00:09:35.923 verify_state_save=0 00:09:35.923 do_verify=1 00:09:35.923 verify=crc32c-intel 00:09:35.923 [job0] 00:09:35.923 filename=/dev/nvme0n1 00:09:35.923 Could not set queue depth (nvme0n1) 00:09:36.180 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.180 fio-3.35 00:09:36.180 Starting 1 thread 00:09:37.555 00:09:37.555 job0: (groupid=0, jobs=1): err= 0: pid=2111491: Wed Jul 24 10:31:44 2024 00:09:37.555 read: IOPS=7317, BW=28.6MiB/s (30.0MB/s)(28.6MiB/1001msec) 00:09:37.555 slat (nsec): min=6222, max=26002, avg=7077.47, stdev=768.96 00:09:37.555 clat (nsec): min=41205, max=83923, avg=57992.35, stdev=3731.75 00:09:37.555 lat (nsec): min=54842, max=93353, avg=65069.82, stdev=3793.17 00:09:37.555 clat percentiles (nsec): 00:09:37.555 | 1.00th=[50944], 5.00th=[52480], 10.00th=[52992], 20.00th=[54528], 00:09:37.555 | 30.00th=[56064], 40.00th=[57088], 50.00th=[58112], 60.00th=[59136], 00:09:37.555 | 70.00th=[60160], 80.00th=[61184], 90.00th=[62720], 95.00th=[64256], 00:09:37.555 | 99.00th=[67072], 99.50th=[69120], 99.90th=[73216], 99.95th=[77312], 00:09:37.555 | 99.99th=[83456] 00:09:37.555 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:09:37.555 slat (nsec): min=8052, max=39240, avg=8899.17, stdev=883.19 00:09:37.555 clat (nsec): min=46656, max=92812, avg=55499.17, stdev=3916.52 00:09:37.555 lat (usec): min=54, max=132, avg=64.40, stdev= 4.06 00:09:37.555 clat percentiles (nsec): 00:09:37.555 | 1.00th=[48384], 5.00th=[49920], 10.00th=[50944], 20.00th=[51968], 00:09:37.555 | 30.00th=[52992], 40.00th=[54016], 50.00th=[55040], 60.00th=[56064], 00:09:37.555 | 70.00th=[57600], 80.00th=[58624], 90.00th=[60672], 95.00th=[62208], 00:09:37.555 | 99.00th=[65280], 99.50th=[67072], 99.90th=[72192], 99.95th=[76288], 00:09:37.555 | 99.99th=[92672] 00:09:37.555 bw ( KiB/s): min=31336, max=31336, per=100.00%, avg=31336.00, stdev= 0.00, samples=1 00:09:37.555 iops : min= 7834, max= 7834, avg=7834.00, stdev= 0.00, samples=1 00:09:37.555 lat (usec) : 50=3.23%, 100=96.77% 00:09:37.555 cpu : usr=7.90%, sys=16.10%, ctx=15005, majf=0, minf=2 00:09:37.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.555 issued rwts: total=7325,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.555 00:09:37.555 Run status group 0 (all jobs): 00:09:37.555 READ: bw=28.6MiB/s (30.0MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:09:37.555 WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:09:37.555 00:09:37.555 Disk stats (read/write): 00:09:37.555 nvme0n1: ios=6706/6818, merge=0/0, ticks=365/330, in_queue=695, util=90.68% 00:09:37.555 10:31:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # 
lsblk -l -o NAME,SERIAL 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:39.456 rmmod nvme_rdma 00:09:39.456 rmmod nvme_fabrics 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2110416 ']' 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2110416 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2110416 ']' 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2110416 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2110416 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2110416' 00:09:39.456 killing process with pid 2110416 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2110416 00:09:39.456 10:31:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2110416 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:39.715 00:09:39.715 real 0m13.818s 00:09:39.715 user 0m39.835s 00:09:39.715 sys 0m4.824s 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.715 ************************************ 
00:09:39.715 END TEST nvmf_nmic 00:09:39.715 ************************************ 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.715 10:31:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.975 ************************************ 00:09:39.975 START TEST nvmf_fio_target 00:09:39.975 ************************************ 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:39.975 * Looking for test storage... 00:09:39.975 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.975 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.976 10:31:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 
-- # e810=() 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:45.249 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:45.249 10:31:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:45.249 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:45.249 Found net devices under 0000:da:00.0: mlx_0_0 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:45.249 Found net devices under 0000:da:00.1: mlx_0_1 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:45.249 10:31:52 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.249 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:45.250 
10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:45.250 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:45.250 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:45.250 altname enp218s0f0np0 00:09:45.250 altname ens818f0np0 00:09:45.250 inet 192.168.100.8/24 scope global mlx_0_0 00:09:45.250 valid_lft forever preferred_lft forever 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:45.250 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:45.250 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:45.250 altname enp218s0f1np1 00:09:45.250 altname ens818f1np1 00:09:45.250 inet 192.168.100.9/24 scope global mlx_0_1 00:09:45.250 valid_lft forever preferred_lft forever 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ 
rdma == \r\d\m\a ]] 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:45.250 192.168.100.9' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:45.250 192.168.100.9' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:45.250 192.168.100.9' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2115021 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2115021 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2115021 ']' 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.250 [2024-07-24 10:31:52.262462] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:09:45.250 [2024-07-24 10:31:52.262523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.250 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.250 [2024-07-24 10:31:52.318991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.250 [2024-07-24 10:31:52.360561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.250 [2024-07-24 10:31:52.360601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.250 [2024-07-24 10:31:52.360608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.250 [2024-07-24 10:31:52.360613] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.250 [2024-07-24 10:31:52.360618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.250 [2024-07-24 10:31:52.360662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.250 [2024-07-24 10:31:52.360761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.250 [2024-07-24 10:31:52.360853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.250 [2024-07-24 10:31:52.360853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:45.250 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.251 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.251 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.251 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.251 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:45.251 [2024-07-24 10:31:52.673729] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7386a0/0x73cb70) succeed. 00:09:45.251 [2024-07-24 10:31:52.683089] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x739c90/0x77e200) succeed. 
00:09:45.558 10:31:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.817 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:45.818 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.818 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:45.818 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.076 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:46.076 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.333 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:46.333 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:46.592 10:31:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.592 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:46.592 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.852 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:46.852 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.112 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:47.112 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:47.371 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.371 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.371 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.629 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.629 10:31:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.888 10:31:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:47.888 [2024-07-24 10:31:55.306180] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:47.888 10:31:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:48.146 10:31:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:48.404 10:31:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:49.340 10:31:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:49.340 10:31:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:49.340 10:31:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.340 10:31:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:49.340 10:31:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:49.340 10:31:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:51.240 10:31:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:51.240 10:31:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:51.240 10:31:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.240 10:31:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:51.240 10:31:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.240 10:31:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:51.240 10:31:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:51.240 [global] 00:09:51.240 thread=1 00:09:51.240 invalidate=1 00:09:51.240 rw=write 00:09:51.240 time_based=1 00:09:51.240 runtime=1 00:09:51.240 ioengine=libaio 00:09:51.240 direct=1 00:09:51.240 bs=4096 00:09:51.240 iodepth=1 00:09:51.240 norandommap=0 00:09:51.240 numjobs=1 00:09:51.240 00:09:51.240 verify_dump=1 00:09:51.240 verify_backlog=512 00:09:51.240 verify_state_save=0 00:09:51.240 do_verify=1 00:09:51.240 verify=crc32c-intel 00:09:51.240 [job0] 00:09:51.240 filename=/dev/nvme0n1 00:09:51.527 [job1] 00:09:51.527 filename=/dev/nvme0n2 00:09:51.527 [job2] 00:09:51.527 filename=/dev/nvme0n3 00:09:51.527 [job3] 00:09:51.527 filename=/dev/nvme0n4 00:09:51.527 Could not set queue depth (nvme0n1) 00:09:51.527 Could not set queue depth (nvme0n2) 00:09:51.527 Could not set queue depth (nvme0n3) 00:09:51.527 Could not set queue depth (nvme0n4) 00:09:51.787 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.787 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.787 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.787 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.787 fio-3.35 00:09:51.787 Starting 4 threads 00:09:53.154 00:09:53.154 job0: (groupid=0, jobs=1): err= 0: pid=2116372: Wed Jul 24 10:32:00 2024 00:09:53.154 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:09:53.154 slat (nsec): min=5816, max=28316, avg=7123.98, stdev=1459.83 00:09:53.154 clat (usec): min=63, max=264, avg=112.50, stdev=12.61 00:09:53.154 lat (usec): min=70, max=271, avg=119.62, stdev=12.68 00:09:53.154 clat percentiles (usec): 00:09:53.154 | 1.00th=[ 78], 5.00th=[ 95], 10.00th=[ 101], 20.00th=[ 105], 00:09:53.154 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 115], 00:09:53.154 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 124], 95.00th=[ 133], 00:09:53.154 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 188], 00:09:53.154 | 99.99th=[ 265] 00:09:53.154 write: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1001msec); 0 zone resets 00:09:53.154 slat (usec): min=7, max=101, avg= 9.38, stdev= 2.63 00:09:53.154 clat (usec): min=59, max=240, avg=108.05, stdev=18.01 00:09:53.154 lat (usec): min=67, max=260, avg=117.43, stdev=18.93 00:09:53.154 clat percentiles (usec): 00:09:53.154 | 1.00th=[ 71], 5.00th=[ 78], 10.00th=[ 94], 20.00th=[ 99], 00:09:53.154 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109], 00:09:53.154 | 70.00th=[ 111], 80.00th=[ 115], 90.00th=[ 131], 95.00th=[ 143], 00:09:53.154 | 99.00th=[ 172], 99.50th=[ 186], 99.90th=[ 210], 99.95th=[ 215], 00:09:53.154 | 99.99th=[ 241] 00:09:53.154 bw ( KiB/s): min=17184, max=17184, per=25.95%, avg=17184.00, stdev= 0.00, samples=1 00:09:53.154 iops : min= 4296, max= 4296, avg=4296.00, stdev= 0.00, samples=1 00:09:53.154 lat (usec) : 100=16.48%, 250=83.51%, 500=0.01% 00:09:53.154 cpu : usr=4.30%, sys=9.60%, ctx=8290, majf=0, minf=1 00:09:53.154 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.154 issued rwts: total=4096,4193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.154 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.154 job1: (groupid=0, jobs=1): err= 0: pid=2116373: Wed Jul 24 10:32:00 2024 00:09:53.154 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:09:53.154 slat (nsec): min=5880, max=26675, avg=7000.97, stdev=892.29 00:09:53.154 clat (usec): min=64, max=291, avg=112.21, stdev=12.48 00:09:53.154 lat (usec): min=72, max=298, avg=119.21, stdev=12.46 00:09:53.154 clat percentiles (usec): 00:09:53.154 | 1.00th=[ 74], 5.00th=[ 93], 10.00th=[ 101], 20.00th=[ 105], 00:09:53.154 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 115], 00:09:53.154 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 124], 95.00th=[ 130], 00:09:53.154 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 167], 00:09:53.154 | 99.99th=[ 293] 00:09:53.154 write: IOPS=4182, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1001msec); 0 zone resets 00:09:53.154 slat (nsec): min=7856, max=56961, avg=9161.89, stdev=1755.22 00:09:53.154 clat (usec): min=61, 
max=232, avg=108.93, stdev=17.89 00:09:53.154 lat (usec): min=70, max=248, avg=118.10, stdev=18.71 00:09:53.155 clat percentiles (usec): 00:09:53.155 | 1.00th=[ 71], 5.00th=[ 83], 10.00th=[ 95], 20.00th=[ 99], 00:09:53.155 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109], 00:09:53.155 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 133], 95.00th=[ 145], 00:09:53.155 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 210], 99.95th=[ 219], 00:09:53.155 | 99.99th=[ 233] 00:09:53.155 bw ( KiB/s): min=17176, max=17176, per=25.94%, avg=17176.00, stdev= 0.00, samples=1 00:09:53.155 iops : min= 4294, max= 4294, avg=4294.00, stdev= 0.00, samples=1 00:09:53.155 lat (usec) : 100=15.68%, 250=84.31%, 500=0.01% 00:09:53.155 cpu : usr=5.00%, sys=8.80%, ctx=8285, majf=0, minf=1 00:09:53.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.155 issued rwts: total=4096,4187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.155 job2: (groupid=0, jobs=1): err= 0: pid=2116374: Wed Jul 24 10:32:00 2024 00:09:53.155 read: IOPS=3582, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:53.155 slat (nsec): min=5762, max=28288, avg=7149.69, stdev=1099.57 00:09:53.155 clat (usec): min=72, max=210, avg=124.88, stdev=18.42 00:09:53.155 lat (usec): min=80, max=218, avg=132.03, stdev=18.44 00:09:53.155 clat percentiles (usec): 00:09:53.155 | 1.00th=[ 81], 5.00th=[ 87], 10.00th=[ 94], 20.00th=[ 119], 00:09:53.155 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:09:53.155 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 161], 00:09:53.155 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 196], 99.95th=[ 202], 00:09:53.155 | 99.99th=[ 210] 00:09:53.155 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:53.155 slat (nsec): min=8064, max=33923, avg=9165.96, stdev=1249.34 00:09:53.155 clat (usec): min=67, max=315, avg=115.56, stdev=21.42 00:09:53.155 lat (usec): min=75, max=335, avg=124.73, stdev=21.50 00:09:53.155 clat percentiles (usec): 00:09:53.155 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 91], 00:09:53.155 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 121], 00:09:53.155 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 147], 95.00th=[ 155], 00:09:53.155 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 204], 00:09:53.155 | 99.99th=[ 318] 00:09:53.155 bw ( KiB/s): min=16384, max=16384, per=24.74%, avg=16384.00, stdev= 0.00, samples=1 00:09:53.155 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:53.155 lat (usec) : 100=17.44%, 250=82.54%, 500=0.01% 00:09:53.155 cpu : usr=4.50%, sys=8.40%, ctx=7682, majf=0, minf=2 00:09:53.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.155 issued rwts: total=3586,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.155 job3: (groupid=0, jobs=1): err= 0: pid=2116375: Wed Jul 24 10:32:00 2024 00:09:53.155 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1001msec) 00:09:53.155 slat (nsec): min=5875, max=24008, avg=6986.17, stdev=870.54 00:09:53.155 
clat (usec): min=72, max=212, avg=124.03, stdev=19.27 00:09:53.155 lat (usec): min=79, max=223, avg=131.01, stdev=19.28 00:09:53.155 clat percentiles (usec): 00:09:53.155 | 1.00th=[ 81], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 118], 00:09:53.155 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:09:53.155 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 161], 00:09:53.155 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 196], 99.95th=[ 204], 00:09:53.155 | 99.99th=[ 212] 00:09:53.155 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:53.155 slat (nsec): min=8042, max=37406, avg=9065.76, stdev=1297.90 00:09:53.155 clat (usec): min=71, max=277, avg=114.12, stdev=20.53 00:09:53.155 lat (usec): min=80, max=289, avg=123.18, stdev=20.45 00:09:53.155 clat percentiles (usec): 00:09:53.155 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 90], 00:09:53.155 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:09:53.155 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 139], 95.00th=[ 153], 00:09:53.155 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 198], 99.95th=[ 227], 00:09:53.155 | 99.99th=[ 277] 00:09:53.155 bw ( KiB/s): min=16384, max=16384, per=24.74%, avg=16384.00, stdev= 0.00, samples=1 00:09:53.155 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:53.155 lat (usec) : 100=19.44%, 250=80.55%, 500=0.01% 00:09:53.155 cpu : usr=4.00%, sys=8.70%, ctx=7758, majf=0, minf=1 00:09:53.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.155 issued rwts: total=3662,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.155 00:09:53.155 Run status group 0 (all jobs): 00:09:53.155 READ: bw=60.3MiB/s (63.2MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=60.3MiB (63.2MB), run=1001-1001msec 00:09:53.155 WRITE: bw=64.7MiB/s (67.8MB/s), 16.0MiB/s-16.4MiB/s (16.8MB/s-17.2MB/s), io=64.7MiB (67.9MB), run=1001-1001msec 00:09:53.155 00:09:53.155 Disk stats (read/write): 00:09:53.155 nvme0n1: ios=3633/3656, merge=0/0, ticks=389/354, in_queue=743, util=87.16% 00:09:53.155 nvme0n2: ios=3584/3665, merge=0/0, ticks=376/358, in_queue=734, util=87.32% 00:09:53.155 nvme0n3: ios=3072/3530, merge=0/0, ticks=354/378, in_queue=732, util=89.22% 00:09:53.155 nvme0n4: ios=3072/3551, merge=0/0, ticks=375/381, in_queue=756, util=89.78% 00:09:53.155 10:32:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:53.155 [global] 00:09:53.155 thread=1 00:09:53.155 invalidate=1 00:09:53.155 rw=randwrite 00:09:53.155 time_based=1 00:09:53.155 runtime=1 00:09:53.155 ioengine=libaio 00:09:53.155 direct=1 00:09:53.155 bs=4096 00:09:53.155 iodepth=1 00:09:53.155 norandommap=0 00:09:53.155 numjobs=1 00:09:53.155 00:09:53.155 verify_dump=1 00:09:53.155 verify_backlog=512 00:09:53.155 verify_state_save=0 00:09:53.155 do_verify=1 00:09:53.155 verify=crc32c-intel 00:09:53.155 [job0] 00:09:53.155 filename=/dev/nvme0n1 00:09:53.155 [job1] 00:09:53.155 filename=/dev/nvme0n2 00:09:53.155 [job2] 00:09:53.155 filename=/dev/nvme0n3 00:09:53.155 [job3] 00:09:53.155 filename=/dev/nvme0n4 00:09:53.155 Could not set queue depth (nvme0n1) 00:09:53.155 Could not set queue depth 
(nvme0n2) 00:09:53.155 Could not set queue depth (nvme0n3) 00:09:53.155 Could not set queue depth (nvme0n4) 00:09:53.155 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.155 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.155 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.155 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.155 fio-3.35 00:09:53.155 Starting 4 threads 00:09:54.522 00:09:54.522 job0: (groupid=0, jobs=1): err= 0: pid=2116741: Wed Jul 24 10:32:01 2024 00:09:54.522 read: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1000msec) 00:09:54.522 slat (nsec): min=5774, max=25932, avg=6874.58, stdev=908.00 00:09:54.522 clat (usec): min=66, max=202, avg=125.20, stdev=22.46 00:09:54.522 lat (usec): min=73, max=209, avg=132.08, stdev=22.47 00:09:54.522 clat percentiles (usec): 00:09:54.522 | 1.00th=[ 75], 5.00th=[ 84], 10.00th=[ 90], 20.00th=[ 114], 00:09:54.522 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:09:54.522 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 157], 95.00th=[ 165], 00:09:54.522 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 194], 00:09:54.522 | 99.99th=[ 204] 00:09:54.522 write: IOPS=4046, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1000msec); 0 zone resets 00:09:54.522 slat (nsec): min=7689, max=37016, avg=8624.95, stdev=1135.12 00:09:54.522 clat (usec): min=62, max=313, avg=117.73, stdev=21.61 00:09:54.522 lat (usec): min=71, max=321, avg=126.36, stdev=21.64 00:09:54.522 clat percentiles (usec): 00:09:54.522 | 1.00th=[ 72], 5.00th=[ 80], 10.00th=[ 85], 20.00th=[ 105], 00:09:54.522 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:09:54.522 | 70.00th=[ 125], 80.00th=[ 131], 90.00th=[ 149], 95.00th=[ 155], 00:09:54.522 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 200], 00:09:54.522 | 99.99th=[ 314] 00:09:54.522 bw ( KiB/s): min=16384, max=16384, per=22.30%, avg=16384.00, stdev= 0.00, samples=1 00:09:54.522 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:54.522 lat (usec) : 100=15.75%, 250=84.22%, 500=0.03% 00:09:54.522 cpu : usr=3.80%, sys=8.70%, ctx=7630, majf=0, minf=1 00:09:54.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.522 issued rwts: total=3584,4046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.522 job1: (groupid=0, jobs=1): err= 0: pid=2116742: Wed Jul 24 10:32:01 2024 00:09:54.522 read: IOPS=3589, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:54.522 slat (nsec): min=5753, max=25111, avg=6838.50, stdev=870.76 00:09:54.522 clat (usec): min=56, max=296, avg=124.59, stdev=24.25 00:09:54.522 lat (usec): min=72, max=303, avg=131.42, stdev=24.23 00:09:54.522 clat percentiles (usec): 00:09:54.522 | 1.00th=[ 74], 5.00th=[ 83], 10.00th=[ 88], 20.00th=[ 109], 00:09:54.522 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:09:54.522 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 159], 95.00th=[ 167], 00:09:54.522 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 200], 00:09:54.522 | 99.99th=[ 297] 00:09:54.522 write: 
IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:54.522 slat (nsec): min=7672, max=33701, avg=8623.77, stdev=1037.97 00:09:54.522 clat (usec): min=64, max=312, avg=116.45, stdev=22.60 00:09:54.522 lat (usec): min=72, max=321, avg=125.08, stdev=22.65 00:09:54.522 clat percentiles (usec): 00:09:54.522 | 1.00th=[ 71], 5.00th=[ 78], 10.00th=[ 83], 20.00th=[ 102], 00:09:54.522 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:09:54.522 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 149], 95.00th=[ 157], 00:09:54.522 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 184], 00:09:54.522 | 99.99th=[ 314] 00:09:54.522 bw ( KiB/s): min=16384, max=16384, per=22.30%, avg=16384.00, stdev= 0.00, samples=1 00:09:54.522 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:54.522 lat (usec) : 100=19.20%, 250=80.78%, 500=0.03% 00:09:54.522 cpu : usr=5.00%, sys=7.50%, ctx=7689, majf=0, minf=1 00:09:54.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.522 issued rwts: total=3593,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.522 job2: (groupid=0, jobs=1): err= 0: pid=2116747: Wed Jul 24 10:32:01 2024 00:09:54.522 read: IOPS=4888, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec) 00:09:54.522 slat (nsec): min=6062, max=22371, avg=7298.48, stdev=1044.21 00:09:54.522 clat (usec): min=72, max=295, avg=92.57, stdev=15.27 00:09:54.523 lat (usec): min=80, max=302, avg=99.86, stdev=15.26 00:09:54.523 clat percentiles (usec): 00:09:54.523 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:09:54.523 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90], 00:09:54.523 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 119], 95.00th=[ 129], 00:09:54.523 | 99.00th=[ 151], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 184], 00:09:54.523 | 99.99th=[ 297] 00:09:54.523 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:54.523 slat (nsec): min=7939, max=34956, avg=8906.95, stdev=1169.48 00:09:54.523 clat (usec): min=59, max=170, avg=87.09, stdev=13.31 00:09:54.523 lat (usec): min=77, max=178, avg=96.00, stdev=13.35 00:09:54.523 clat percentiles (usec): 00:09:54.523 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:09:54.523 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:09:54.523 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 120], 00:09:54.523 | 99.00th=[ 141], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 169], 00:09:54.523 | 99.99th=[ 172] 00:09:54.523 bw ( KiB/s): min=21144, max=21144, per=28.79%, avg=21144.00, stdev= 0.00, samples=1 00:09:54.523 iops : min= 5286, max= 5286, avg=5286.00, stdev= 0.00, samples=1 00:09:54.523 lat (usec) : 100=87.35%, 250=12.64%, 500=0.01% 00:09:54.523 cpu : usr=5.30%, sys=11.20%, ctx=10014, majf=0, minf=1 00:09:54.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.523 issued rwts: total=4893,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.523 job3: (groupid=0, jobs=1): err= 0: pid=2116749: Wed Jul 24 10:32:01 
2024 00:09:54.523 read: IOPS=4936, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1001msec) 00:09:54.523 slat (nsec): min=5938, max=24991, avg=7142.19, stdev=838.76 00:09:54.523 clat (usec): min=72, max=1066, avg=91.73, stdev=19.65 00:09:54.523 lat (usec): min=79, max=1073, avg=98.87, stdev=19.68 00:09:54.523 clat percentiles (usec): 00:09:54.523 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:09:54.523 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90], 00:09:54.523 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 126], 00:09:54.523 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 178], 00:09:54.523 | 99.99th=[ 1074] 00:09:54.523 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:54.523 slat (nsec): min=7788, max=40946, avg=8838.63, stdev=1018.49 00:09:54.523 clat (usec): min=69, max=179, avg=87.16, stdev=13.36 00:09:54.523 lat (usec): min=78, max=188, avg=95.99, stdev=13.42 00:09:54.523 clat percentiles (usec): 00:09:54.523 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:09:54.523 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:09:54.523 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 120], 00:09:54.523 | 99.00th=[ 135], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 172], 00:09:54.523 | 99.99th=[ 180] 00:09:54.523 bw ( KiB/s): min=21264, max=21264, per=28.95%, avg=21264.00, stdev= 0.00, samples=1 00:09:54.523 iops : min= 5316, max= 5316, avg=5316.00, stdev= 0.00, samples=1 00:09:54.523 lat (usec) : 100=88.51%, 250=11.48% 00:09:54.523 lat (msec) : 2=0.01% 00:09:54.523 cpu : usr=4.30%, sys=12.10%, ctx=10061, majf=0, minf=2 00:09:54.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.523 issued rwts: total=4941,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.523 00:09:54.523 Run status group 0 (all jobs): 00:09:54.523 READ: bw=66.4MiB/s (69.6MB/s), 14.0MiB/s-19.3MiB/s (14.7MB/s-20.2MB/s), io=66.4MiB (69.7MB), run=1000-1001msec 00:09:54.523 WRITE: bw=71.7MiB/s (75.2MB/s), 15.8MiB/s-20.0MiB/s (16.6MB/s-20.9MB/s), io=71.8MiB (75.3MB), run=1000-1001msec 00:09:54.523 00:09:54.523 Disk stats (read/write): 00:09:54.523 nvme0n1: ios=3121/3450, merge=0/0, ticks=391/376, in_queue=767, util=87.37% 00:09:54.523 nvme0n2: ios=3072/3508, merge=0/0, ticks=355/387, in_queue=742, util=87.34% 00:09:54.523 nvme0n3: ios=4337/4608, merge=0/0, ticks=355/358, in_queue=713, util=89.23% 00:09:54.523 nvme0n4: ios=4386/4608, merge=0/0, ticks=358/348, in_queue=706, util=89.79% 00:09:54.523 10:32:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:54.523 [global] 00:09:54.523 thread=1 00:09:54.523 invalidate=1 00:09:54.523 rw=write 00:09:54.523 time_based=1 00:09:54.523 runtime=1 00:09:54.523 ioengine=libaio 00:09:54.523 direct=1 00:09:54.523 bs=4096 00:09:54.523 iodepth=128 00:09:54.523 norandommap=0 00:09:54.523 numjobs=1 00:09:54.523 00:09:54.523 verify_dump=1 00:09:54.523 verify_backlog=512 00:09:54.523 verify_state_save=0 00:09:54.523 do_verify=1 00:09:54.523 verify=crc32c-intel 00:09:54.523 [job0] 00:09:54.523 filename=/dev/nvme0n1 00:09:54.523 [job1] 00:09:54.523 filename=/dev/nvme0n2 00:09:54.523 [job2] 
00:09:54.523 filename=/dev/nvme0n3 00:09:54.523 [job3] 00:09:54.523 filename=/dev/nvme0n4 00:09:54.523 Could not set queue depth (nvme0n1) 00:09:54.523 Could not set queue depth (nvme0n2) 00:09:54.523 Could not set queue depth (nvme0n3) 00:09:54.523 Could not set queue depth (nvme0n4) 00:09:54.779 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.779 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.779 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.779 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.779 fio-3.35 00:09:54.779 Starting 4 threads 00:09:56.169 00:09:56.169 job0: (groupid=0, jobs=1): err= 0: pid=2117125: Wed Jul 24 10:32:03 2024 00:09:56.169 read: IOPS=5438, BW=21.2MiB/s (22.3MB/s)(21.4MiB/1005msec) 00:09:56.169 slat (nsec): min=1325, max=3342.2k, avg=91101.68, stdev=318491.70 00:09:56.169 clat (usec): min=3997, max=15585, avg=11810.36, stdev=1312.71 00:09:56.169 lat (usec): min=4664, max=15904, avg=11901.46, stdev=1334.74 00:09:56.169 clat percentiles (usec): 00:09:56.169 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[10421], 20.00th=[10814], 00:09:56.169 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11994], 00:09:56.169 | 70.00th=[12780], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:09:56.169 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15401], 99.95th=[15533], 00:09:56.169 | 99.99th=[15533] 00:09:56.169 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:56.169 slat (nsec): min=1812, max=2885.0k, avg=86267.99, stdev=300172.66 00:09:56.169 clat (usec): min=6170, max=15684, avg=11132.19, stdev=1155.01 00:09:56.169 lat (usec): min=6206, max=15707, avg=11218.46, stdev=1177.35 00:09:56.169 clat percentiles (usec): 00:09:56.169 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10159], 00:09:56.169 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:09:56.169 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12911], 95.00th=[13042], 00:09:56.169 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14484], 99.95th=[14746], 00:09:56.169 | 99.99th=[15664] 00:09:56.169 bw ( KiB/s): min=20480, max=24576, per=23.99%, avg=22528.00, stdev=2896.31, samples=2 00:09:56.169 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:56.169 lat (msec) : 4=0.01%, 10=7.03%, 20=92.96% 00:09:56.169 cpu : usr=2.39%, sys=3.69%, ctx=1024, majf=0, minf=1 00:09:56.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:56.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.169 issued rwts: total=5466,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.169 job1: (groupid=0, jobs=1): err= 0: pid=2117126: Wed Jul 24 10:32:03 2024 00:09:56.169 read: IOPS=5365, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1005msec) 00:09:56.169 slat (nsec): min=1377, max=3422.3k, avg=92383.01, stdev=348419.69 00:09:56.169 clat (usec): min=3958, max=16178, avg=11841.88, stdev=1379.99 00:09:56.169 lat (usec): min=5441, max=16711, avg=11934.27, stdev=1413.81 00:09:56.169 clat percentiles (usec): 00:09:56.169 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:09:56.169 | 
30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:09:56.169 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:09:56.169 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15401], 99.95th=[15533], 00:09:56.169 | 99.99th=[16188] 00:09:56.169 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:56.169 slat (nsec): min=1920, max=2753.7k, avg=86021.20, stdev=326308.28 00:09:56.169 clat (usec): min=7681, max=15528, avg=11245.27, stdev=1232.58 00:09:56.169 lat (usec): min=7720, max=15539, avg=11331.29, stdev=1268.78 00:09:56.169 clat percentiles (usec): 00:09:56.169 | 1.00th=[ 9634], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10159], 00:09:56.169 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:09:56.169 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:09:56.169 | 99.00th=[14091], 99.50th=[14484], 99.90th=[14877], 99.95th=[15270], 00:09:56.169 | 99.99th=[15533] 00:09:56.169 bw ( KiB/s): min=20480, max=24576, per=23.99%, avg=22528.00, stdev=2896.31, samples=2 00:09:56.169 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:56.169 lat (msec) : 4=0.01%, 10=5.52%, 20=94.48% 00:09:56.169 cpu : usr=2.29%, sys=4.28%, ctx=921, majf=0, minf=1 00:09:56.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:56.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.169 issued rwts: total=5392,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.169 job2: (groupid=0, jobs=1): err= 0: pid=2117127: Wed Jul 24 10:32:03 2024 00:09:56.169 read: IOPS=6090, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1003msec) 00:09:56.169 slat (nsec): min=1364, max=1644.0k, avg=83126.32, stdev=250582.16 00:09:56.169 clat (usec): min=1936, max=14789, avg=10662.56, stdev=2638.07 00:09:56.169 lat (usec): min=2954, max=14796, avg=10745.68, stdev=2650.54 00:09:56.169 clat percentiles (usec): 00:09:56.169 | 1.00th=[ 7046], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7963], 00:09:56.169 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[12911], 00:09:56.169 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:09:56.169 | 99.00th=[13829], 99.50th=[13829], 99.90th=[14746], 99.95th=[14746], 00:09:56.169 | 99.99th=[14746] 00:09:56.169 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:09:56.169 slat (nsec): min=1937, max=1334.5k, avg=77290.07, stdev=234920.15 00:09:56.169 clat (usec): min=6717, max=13525, avg=10062.31, stdev=2457.12 00:09:56.169 lat (usec): min=6719, max=13529, avg=10139.60, stdev=2469.79 00:09:56.169 clat percentiles (usec): 00:09:56.169 | 1.00th=[ 6849], 5.00th=[ 6980], 10.00th=[ 7111], 20.00th=[ 7635], 00:09:56.169 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[12256], 00:09:56.169 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13304], 00:09:56.169 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:09:56.169 | 99.99th=[13566] 00:09:56.169 bw ( KiB/s): min=19808, max=29344, per=26.18%, avg=24576.00, stdev=6742.97, samples=2 00:09:56.170 iops : min= 4952, max= 7336, avg=6144.00, stdev=1685.74, samples=2 00:09:56.170 lat (msec) : 2=0.01%, 4=0.15%, 10=53.17%, 20=46.67% 00:09:56.170 cpu : usr=3.19%, sys=3.79%, ctx=1492, majf=0, minf=1 00:09:56.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.3%, >=64=99.5% 00:09:56.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.170 issued rwts: total=6109,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.170 job3: (groupid=0, jobs=1): err= 0: pid=2117128: Wed Jul 24 10:32:03 2024 00:09:56.170 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:09:56.170 slat (nsec): min=1233, max=1501.0k, avg=82427.06, stdev=246837.22 00:09:56.170 clat (usec): min=6718, max=15582, avg=10697.66, stdev=2624.27 00:09:56.170 lat (usec): min=6726, max=15585, avg=10780.08, stdev=2640.90 00:09:56.170 clat percentiles (usec): 00:09:56.170 | 1.00th=[ 7308], 5.00th=[ 7701], 10.00th=[ 7898], 20.00th=[ 8029], 00:09:56.170 | 30.00th=[ 8225], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[13173], 00:09:56.170 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:09:56.170 | 99.00th=[13829], 99.50th=[13829], 99.90th=[15533], 99.95th=[15533], 00:09:56.170 | 99.99th=[15533] 00:09:56.170 write: IOPS=6162, BW=24.1MiB/s (25.2MB/s)(24.1MiB/1003msec); 0 zone resets 00:09:56.170 slat (nsec): min=1936, max=1610.9k, avg=76669.37, stdev=232135.60 00:09:56.170 clat (usec): min=1963, max=13522, avg=9903.91, stdev=2457.12 00:09:56.170 lat (usec): min=2988, max=13526, avg=9980.58, stdev=2472.22 00:09:56.170 clat percentiles (usec): 00:09:56.170 | 1.00th=[ 6980], 5.00th=[ 7439], 10.00th=[ 7570], 20.00th=[ 7701], 00:09:56.170 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[11994], 00:09:56.170 | 70.00th=[12518], 80.00th=[12780], 90.00th=[12911], 95.00th=[13173], 00:09:56.170 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13435], 99.95th=[13435], 00:09:56.170 | 99.99th=[13566] 00:09:56.170 bw ( KiB/s): min=19184, max=29968, per=26.18%, avg=24576.00, stdev=7625.44, samples=2 00:09:56.170 iops : min= 4796, max= 7492, avg=6144.00, stdev=1906.36, samples=2 00:09:56.170 lat (msec) : 2=0.01%, 4=0.07%, 10=53.60%, 20=46.32% 00:09:56.170 cpu : usr=2.40%, sys=4.39%, ctx=1369, majf=0, minf=1 00:09:56.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:56.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.170 issued rwts: total=6144,6181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.170 00:09:56.170 Run status group 0 (all jobs): 00:09:56.170 READ: bw=89.8MiB/s (94.2MB/s), 21.0MiB/s-23.9MiB/s (22.0MB/s-25.1MB/s), io=90.3MiB (94.7MB), run=1003-1005msec 00:09:56.170 WRITE: bw=91.7MiB/s (96.1MB/s), 21.9MiB/s-24.1MiB/s (23.0MB/s-25.2MB/s), io=92.1MiB (96.6MB), run=1003-1005msec 00:09:56.170 00:09:56.170 Disk stats (read/write): 00:09:56.170 nvme0n1: ios=4658/4773, merge=0/0, ticks=27032/26315, in_queue=53347, util=87.17% 00:09:56.170 nvme0n2: ios=4613/4710, merge=0/0, ticks=27134/26205, in_queue=53339, util=87.35% 00:09:56.170 nvme0n3: ios=5235/5632, merge=0/0, ticks=13414/13600, in_queue=27014, util=89.25% 00:09:56.170 nvme0n4: ios=5313/5632, merge=0/0, ticks=13669/13435, in_queue=27104, util=89.80% 00:09:56.170 10:32:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:56.170 [global] 00:09:56.170 thread=1 
00:09:56.170 invalidate=1 00:09:56.170 rw=randwrite 00:09:56.170 time_based=1 00:09:56.170 runtime=1 00:09:56.170 ioengine=libaio 00:09:56.170 direct=1 00:09:56.170 bs=4096 00:09:56.170 iodepth=128 00:09:56.170 norandommap=0 00:09:56.170 numjobs=1 00:09:56.170 00:09:56.170 verify_dump=1 00:09:56.170 verify_backlog=512 00:09:56.170 verify_state_save=0 00:09:56.170 do_verify=1 00:09:56.170 verify=crc32c-intel 00:09:56.170 [job0] 00:09:56.170 filename=/dev/nvme0n1 00:09:56.170 [job1] 00:09:56.170 filename=/dev/nvme0n2 00:09:56.170 [job2] 00:09:56.170 filename=/dev/nvme0n3 00:09:56.170 [job3] 00:09:56.170 filename=/dev/nvme0n4 00:09:56.170 Could not set queue depth (nvme0n1) 00:09:56.170 Could not set queue depth (nvme0n2) 00:09:56.170 Could not set queue depth (nvme0n3) 00:09:56.170 Could not set queue depth (nvme0n4) 00:09:56.428 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.428 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.428 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.428 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.428 fio-3.35 00:09:56.428 Starting 4 threads 00:09:57.795 00:09:57.795 job0: (groupid=0, jobs=1): err= 0: pid=2117497: Wed Jul 24 10:32:04 2024 00:09:57.795 read: IOPS=9206, BW=36.0MiB/s (37.7MB/s)(36.0MiB/1001msec) 00:09:57.795 slat (nsec): min=1398, max=1395.6k, avg=53692.17, stdev=193306.78 00:09:57.795 clat (usec): min=5607, max=8908, avg=7045.58, stdev=465.42 00:09:57.795 lat (usec): min=5613, max=8911, avg=7099.27, stdev=480.23 00:09:57.795 clat percentiles (usec): 00:09:57.795 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652], 00:09:57.795 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:09:57.795 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7898], 00:09:57.795 | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[ 8586], 99.95th=[ 8717], 00:09:57.795 | 99.99th=[ 8848] 00:09:57.795 write: IOPS=9318, BW=36.4MiB/s (38.2MB/s)(36.4MiB/1001msec); 0 zone resets 00:09:57.795 slat (nsec): min=1863, max=1332.5k, avg=50814.47, stdev=179440.79 00:09:57.795 clat (usec): min=479, max=8388, avg=6627.37, stdev=584.52 00:09:57.795 lat (usec): min=1106, max=8395, avg=6678.19, stdev=596.94 00:09:57.795 clat percentiles (usec): 00:09:57.795 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 6259], 00:09:57.795 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6783], 00:09:57.795 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:09:57.796 | 99.00th=[ 7832], 99.50th=[ 7898], 99.90th=[ 8160], 99.95th=[ 8225], 00:09:57.796 | 99.99th=[ 8455] 00:09:57.796 bw ( KiB/s): min=36864, max=36864, per=33.84%, avg=36864.00, stdev= 0.00, samples=1 00:09:57.796 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=1 00:09:57.796 lat (usec) : 500=0.01% 00:09:57.796 lat (msec) : 2=0.17%, 4=0.17%, 10=99.65% 00:09:57.796 cpu : usr=4.00%, sys=7.20%, ctx=1345, majf=0, minf=1 00:09:57.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:57.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.796 issued rwts: total=9216,9328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.796 
latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.796 job1: (groupid=0, jobs=1): err= 0: pid=2117498: Wed Jul 24 10:32:04 2024 00:09:57.796 read: IOPS=9488, BW=37.1MiB/s (38.9MB/s)(37.1MiB/1002msec) 00:09:57.796 slat (nsec): min=1373, max=1568.8k, avg=51798.28, stdev=187242.31 00:09:57.796 clat (usec): min=467, max=8569, avg=6777.09, stdev=552.22 00:09:57.796 lat (usec): min=1201, max=8580, avg=6828.89, stdev=551.26 00:09:57.796 clat percentiles (usec): 00:09:57.796 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:09:57.796 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6849], 00:09:57.796 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7308], 95.00th=[ 7570], 00:09:57.796 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[ 8356], 99.95th=[ 8455], 00:09:57.796 | 99.99th=[ 8586] 00:09:57.796 write: IOPS=9708, BW=37.9MiB/s (39.8MB/s)(38.0MiB/1002msec); 0 zone resets 00:09:57.796 slat (nsec): min=1868, max=1461.8k, avg=48894.03, stdev=175790.35 00:09:57.796 clat (usec): min=3624, max=9386, avg=6426.71, stdev=420.70 00:09:57.796 lat (usec): min=4153, max=9388, avg=6475.61, stdev=416.01 00:09:57.796 clat percentiles (usec): 00:09:57.796 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6128], 00:09:57.796 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6456], 00:09:57.796 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7177], 00:09:57.796 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[ 8717], 99.95th=[ 9372], 00:09:57.796 | 99.99th=[ 9372] 00:09:57.796 bw ( KiB/s): min=37040, max=40784, per=35.72%, avg=38912.00, stdev=2647.41, samples=2 00:09:57.796 iops : min= 9260, max=10196, avg=9728.00, stdev=661.85, samples=2 00:09:57.796 lat (usec) : 500=0.01% 00:09:57.796 lat (msec) : 2=0.17%, 4=0.15%, 10=99.68% 00:09:57.796 cpu : usr=4.10%, sys=7.39%, ctx=1280, majf=0, minf=1 00:09:57.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:57.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.796 issued rwts: total=9507,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.796 job2: (groupid=0, jobs=1): err= 0: pid=2117499: Wed Jul 24 10:32:04 2024 00:09:57.796 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:57.796 slat (nsec): min=1536, max=1220.6k, avg=123115.33, stdev=314984.49 00:09:57.796 clat (usec): min=6751, max=17406, avg=15845.69, stdev=935.64 00:09:57.796 lat (usec): min=6757, max=17409, avg=15968.81, stdev=886.13 00:09:57.796 clat percentiles (usec): 00:09:57.796 | 1.00th=[10552], 5.00th=[14877], 10.00th=[15139], 20.00th=[15533], 00:09:57.796 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16057], 60.00th=[16057], 00:09:57.796 | 70.00th=[16188], 80.00th=[16319], 90.00th=[16450], 95.00th=[16581], 00:09:57.796 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:09:57.796 | 99.99th=[17433] 00:09:57.796 write: IOPS=4116, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1003msec); 0 zone resets 00:09:57.796 slat (nsec): min=1962, max=1573.1k, avg=116109.45, stdev=296560.33 00:09:57.796 clat (usec): min=1810, max=17042, avg=14946.14, stdev=1085.93 00:09:57.796 lat (usec): min=2713, max=17045, avg=15062.25, stdev=1044.43 00:09:57.796 clat percentiles (usec): 00:09:57.796 | 1.00th=[12780], 5.00th=[14091], 10.00th=[14353], 20.00th=[14615], 00:09:57.796 | 30.00th=[14877], 
40.00th=[15008], 50.00th=[15139], 60.00th=[15139], 00:09:57.796 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:09:57.796 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16188], 99.95th=[16188], 00:09:57.796 | 99.99th=[17171] 00:09:57.796 bw ( KiB/s): min=16384, max=16384, per=15.04%, avg=16384.00, stdev= 0.00, samples=2 00:09:57.796 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:57.796 lat (msec) : 2=0.01%, 4=0.17%, 10=0.61%, 20=99.21% 00:09:57.796 cpu : usr=2.10%, sys=4.29%, ctx=1193, majf=0, minf=1 00:09:57.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:57.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.796 issued rwts: total=4096,4129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.796 job3: (groupid=0, jobs=1): err= 0: pid=2117500: Wed Jul 24 10:32:04 2024 00:09:57.796 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:57.796 slat (nsec): min=1533, max=1239.9k, avg=123172.75, stdev=315117.06 00:09:57.796 clat (usec): min=6740, max=17402, avg=15841.91, stdev=954.12 00:09:57.796 lat (usec): min=6745, max=17405, avg=15965.08, stdev=905.91 00:09:57.796 clat percentiles (usec): 00:09:57.796 | 1.00th=[10552], 5.00th=[14877], 10.00th=[15139], 20.00th=[15533], 00:09:57.796 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16057], 60.00th=[16057], 00:09:57.796 | 70.00th=[16188], 80.00th=[16319], 90.00th=[16450], 95.00th=[16712], 00:09:57.796 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:09:57.796 | 99.99th=[17433] 00:09:57.796 write: IOPS=4116, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1003msec); 0 zone resets 00:09:57.796 slat (nsec): min=1999, max=1553.4k, avg=116108.05, stdev=296397.49 00:09:57.796 clat (usec): min=1808, max=16227, avg=14944.16, stdev=1101.18 00:09:57.796 lat (usec): min=2697, max=16235, avg=15060.27, stdev=1060.00 00:09:57.796 clat percentiles (usec): 00:09:57.796 | 1.00th=[13304], 5.00th=[14091], 10.00th=[14353], 20.00th=[14615], 00:09:57.796 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15139], 00:09:57.796 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:09:57.796 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16188], 99.95th=[16188], 00:09:57.796 | 99.99th=[16188] 00:09:57.796 bw ( KiB/s): min=16384, max=16384, per=15.04%, avg=16384.00, stdev= 0.00, samples=2 00:09:57.796 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:57.796 lat (msec) : 2=0.01%, 4=0.21%, 10=0.57%, 20=99.21% 00:09:57.796 cpu : usr=2.00%, sys=4.29%, ctx=1174, majf=0, minf=1 00:09:57.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:57.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.796 issued rwts: total=4096,4129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.796 00:09:57.796 Run status group 0 (all jobs): 00:09:57.796 READ: bw=105MiB/s (110MB/s), 16.0MiB/s-37.1MiB/s (16.7MB/s-38.9MB/s), io=105MiB (110MB), run=1001-1003msec 00:09:57.796 WRITE: bw=106MiB/s (112MB/s), 16.1MiB/s-37.9MiB/s (16.9MB/s-39.8MB/s), io=107MiB (112MB), run=1001-1003msec 00:09:57.796 00:09:57.796 Disk stats (read/write): 00:09:57.796 nvme0n1: 
ios=7750/8192, merge=0/0, ticks=13374/13148, in_queue=26522, util=87.26% 00:09:57.796 nvme0n2: ios=8192/8423, merge=0/0, ticks=22894/22469, in_queue=45363, util=87.42% 00:09:57.796 nvme0n3: ios=3474/3584, merge=0/0, ticks=13833/13345, in_queue=27178, util=89.23% 00:09:57.796 nvme0n4: ios=3471/3584, merge=0/0, ticks=13783/13321, in_queue=27104, util=89.79% 00:09:57.796 10:32:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:57.796 10:32:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2117732 00:09:57.796 10:32:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:57.796 10:32:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:57.796 [global] 00:09:57.796 thread=1 00:09:57.796 invalidate=1 00:09:57.796 rw=read 00:09:57.796 time_based=1 00:09:57.796 runtime=10 00:09:57.796 ioengine=libaio 00:09:57.796 direct=1 00:09:57.796 bs=4096 00:09:57.796 iodepth=1 00:09:57.796 norandommap=1 00:09:57.796 numjobs=1 00:09:57.796 00:09:57.796 [job0] 00:09:57.796 filename=/dev/nvme0n1 00:09:57.796 [job1] 00:09:57.796 filename=/dev/nvme0n2 00:09:57.796 [job2] 00:09:57.796 filename=/dev/nvme0n3 00:09:57.796 [job3] 00:09:57.796 filename=/dev/nvme0n4 00:09:57.796 Could not set queue depth (nvme0n1) 00:09:57.796 Could not set queue depth (nvme0n2) 00:09:57.796 Could not set queue depth (nvme0n3) 00:09:57.796 Could not set queue depth (nvme0n4) 00:09:57.796 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.796 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.796 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.796 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.796 fio-3.35 00:09:57.796 Starting 4 threads 00:10:01.065 10:32:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:01.065 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=70918144, buflen=4096 00:10:01.065 fio: pid=2117878, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.065 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:01.065 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=89235456, buflen=4096 00:10:01.065 fio: pid=2117877, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.065 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.065 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:01.065 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24883200, buflen=4096 00:10:01.065 fio: pid=2117875, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.065 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.065 
10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:01.322 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=31748096, buflen=4096 00:10:01.322 fio: pid=2117876, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.322 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.322 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:01.322 00:10:01.322 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2117875: Wed Jul 24 10:32:08 2024 00:10:01.322 read: IOPS=7256, BW=28.3MiB/s (29.7MB/s)(87.7MiB/3095msec) 00:10:01.322 slat (usec): min=4, max=12027, avg= 8.46, stdev=116.87 00:10:01.322 clat (usec): min=47, max=20707, avg=127.22, stdev=197.70 00:10:01.322 lat (usec): min=54, max=20714, avg=135.68, stdev=229.39 00:10:01.322 clat percentiles (usec): 00:10:01.322 | 1.00th=[ 58], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 80], 00:10:01.322 | 30.00th=[ 87], 40.00th=[ 115], 50.00th=[ 143], 60.00th=[ 149], 00:10:01.322 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:10:01.322 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 219], 99.95th=[ 225], 00:10:01.322 | 99.99th=[ 338] 00:10:01.322 bw ( KiB/s): min=25048, max=40448, per=27.58%, avg=28416.00, stdev=6737.84, samples=5 00:10:01.322 iops : min= 6262, max=10112, avg=7104.00, stdev=1684.46, samples=5 00:10:01.322 lat (usec) : 50=0.04%, 100=35.61%, 250=64.33%, 500=0.01% 00:10:01.322 lat (msec) : 50=0.01% 00:10:01.322 cpu : usr=2.46%, sys=7.82%, ctx=22466, majf=0, minf=1 00:10:01.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.322 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.322 issued rwts: total=22460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.322 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2117876: Wed Jul 24 10:32:08 2024 00:10:01.322 read: IOPS=7254, BW=28.3MiB/s (29.7MB/s)(94.3MiB/3327msec) 00:10:01.322 slat (usec): min=4, max=12926, avg=10.20, stdev=162.64 00:10:01.322 clat (usec): min=45, max=20618, avg=125.39, stdev=231.52 00:10:01.322 lat (usec): min=53, max=20645, avg=135.60, stdev=282.82 00:10:01.322 clat percentiles (usec): 00:10:01.322 | 1.00th=[ 54], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 77], 00:10:01.322 | 30.00th=[ 84], 40.00th=[ 114], 50.00th=[ 141], 60.00th=[ 147], 00:10:01.322 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 178], 00:10:01.322 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 223], 99.95th=[ 229], 00:10:01.322 | 99.99th=[20317] 00:10:01.322 bw ( KiB/s): min=24368, max=34256, per=26.67%, avg=27473.83, stdev=3888.53, samples=6 00:10:01.322 iops : min= 6092, max= 8564, avg=6868.33, stdev=972.03, samples=6 00:10:01.322 lat (usec) : 50=0.02%, 100=36.63%, 250=63.32%, 500=0.02% 00:10:01.322 lat (msec) : 50=0.01% 00:10:01.322 cpu : usr=2.22%, sys=8.57%, ctx=24144, majf=0, minf=1 00:10:01.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.322 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.322 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.322 issued rwts: total=24136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.322 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2117877: Wed Jul 24 10:32:08 2024 00:10:01.322 read: IOPS=7538, BW=29.4MiB/s (30.9MB/s)(85.1MiB/2890msec) 00:10:01.322 slat (usec): min=4, max=7876, avg= 8.59, stdev=74.44 00:10:01.322 clat (usec): min=56, max=20472, avg=121.76, stdev=142.41 00:10:01.322 lat (usec): min=69, max=20483, avg=130.35, stdev=160.88 00:10:01.322 clat percentiles (usec): 00:10:01.322 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 84], 00:10:01.323 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 121], 60.00th=[ 141], 00:10:01.323 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:10:01.323 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 225], 99.95th=[ 231], 00:10:01.323 | 99.99th=[ 269] 00:10:01.323 bw ( KiB/s): min=24456, max=42800, per=29.97%, avg=30876.80, stdev=8158.90, samples=5 00:10:01.323 iops : min= 6114, max=10700, avg=7719.20, stdev=2039.73, samples=5 00:10:01.323 lat (usec) : 100=43.70%, 250=56.27%, 500=0.02% 00:10:01.323 lat (msec) : 50=0.01% 00:10:01.323 cpu : usr=2.91%, sys=8.17%, ctx=21790, majf=0, minf=1 00:10:01.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.323 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.323 issued rwts: total=21787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.323 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2117878: Wed Jul 24 10:32:08 2024 00:10:01.323 read: IOPS=6420, BW=25.1MiB/s (26.3MB/s)(67.6MiB/2697msec) 00:10:01.323 slat (nsec): min=4652, max=38423, avg=7108.16, stdev=1353.46 00:10:01.323 clat (usec): min=59, max=553, avg=146.24, stdev=27.11 00:10:01.323 lat (usec): min=67, max=560, avg=153.34, stdev=27.18 00:10:01.323 clat percentiles (usec): 00:10:01.323 | 1.00th=[ 80], 5.00th=[ 88], 10.00th=[ 96], 20.00th=[ 131], 00:10:01.323 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:10:01.323 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 182], 00:10:01.323 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 221], 99.95th=[ 225], 00:10:01.323 | 99.99th=[ 302] 00:10:01.323 bw ( KiB/s): min=24728, max=27384, per=25.03%, avg=25788.80, stdev=1180.75, samples=5 00:10:01.323 iops : min= 6182, max= 6846, avg=6447.20, stdev=295.19, samples=5 00:10:01.323 lat (usec) : 100=11.53%, 250=88.44%, 500=0.02%, 750=0.01% 00:10:01.323 cpu : usr=2.08%, sys=7.42%, ctx=17315, majf=0, minf=2 00:10:01.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.323 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.323 issued rwts: total=17315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.323 00:10:01.323 Run status group 0 (all jobs): 00:10:01.323 READ: bw=101MiB/s (106MB/s), 25.1MiB/s-29.4MiB/s (26.3MB/s-30.9MB/s), io=335MiB (351MB), run=2697-3327msec 00:10:01.323 
00:10:01.323 Disk stats (read/write): 00:10:01.323 nvme0n1: ios=20314/0, merge=0/0, ticks=2528/0, in_queue=2528, util=94.79% 00:10:01.323 nvme0n2: ios=21363/0, merge=0/0, ticks=2728/0, in_queue=2728, util=94.92% 00:10:01.323 nvme0n3: ios=21680/0, merge=0/0, ticks=2493/0, in_queue=2493, util=96.11% 00:10:01.323 nvme0n4: ios=16854/0, merge=0/0, ticks=2320/0, in_queue=2320, util=96.48% 00:10:01.580 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.580 10:32:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:01.836 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.836 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:02.092 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.092 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:02.092 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.092 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:02.348 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:02.348 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2117732 00:10:02.348 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:02.348 10:32:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:03.277 nvmf hotplug test: fio failed as expected 00:10:03.277 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:03.534 rmmod nvme_rdma 00:10:03.534 rmmod nvme_fabrics 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2115021 ']' 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2115021 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2115021 ']' 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2115021 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2115021 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2115021' 00:10:03.534 killing process with pid 2115021 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2115021 00:10:03.534 10:32:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2115021 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma 
== \t\c\p ]] 00:10:03.791 00:10:03.791 real 0m23.967s 00:10:03.791 user 1m49.282s 00:10:03.791 sys 0m7.901s 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.791 ************************************ 00:10:03.791 END TEST nvmf_fio_target 00:10:03.791 ************************************ 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.791 ************************************ 00:10:03.791 START TEST nvmf_bdevio 00:10:03.791 ************************************ 00:10:03.791 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:04.049 * Looking for test storage... 00:10:04.049 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:04.049 10:32:11 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.049 10:32:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.304 10:32:16 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.304 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:09.305 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:09.305 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:09.305 Found net devices under 0000:da:00.0: mlx_0_0 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:09.305 Found net devices under 0000:da:00.1: mlx_0_1 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
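Each "Found net devices under ..." line comes from globbing the PCI function's net/ directory, exactly as the pci_net_devs assignment in the trace does; with real hardware present this resolves to the mlx_0_0 and mlx_0_1 netdevs and is_hw flips to yes. A rough stand-alone sketch of the same lookup, assuming the PCI addresses seen in this run:

    for pci in 0000:da:00.0 0000:da:00.1; do
        # one entry per network interface exposed by this PCI function
        ls "/sys/bus/pci/devices/$pci/net/"
    done
    # on this node the loop prints mlx_0_0, then mlx_0_1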
00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:09.305 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:09.305 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:09.305 altname enp218s0f0np0 00:10:09.305 altname ens818f0np0 00:10:09.305 inet 192.168.100.8/24 scope global mlx_0_0 00:10:09.305 valid_lft forever preferred_lft forever 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:09.305 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:09.305 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:09.305 altname enp218s0f1np1 00:10:09.305 altname ens818f1np1 00:10:09.305 inet 192.168.100.9/24 scope global mlx_0_1 00:10:09.305 valid_lft forever preferred_lft forever 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:09.305 10:32:16 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:09.305 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:09.306 192.168.100.9' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:09.306 192.168.100.9' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:10:09.306 
10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:09.306 192.168.100.9' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2121885 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2121885 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2121885 ']' 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 [2024-07-24 10:32:16.334164] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:10:09.306 [2024-07-24 10:32:16.334211] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.306 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.306 [2024-07-24 10:32:16.391319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.306 [2024-07-24 10:32:16.433329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:09.306 [2024-07-24 10:32:16.433370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.306 [2024-07-24 10:32:16.433378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.306 [2024-07-24 10:32:16.433383] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.306 [2024-07-24 10:32:16.433387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.306 [2024-07-24 10:32:16.433546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:09.306 [2024-07-24 10:32:16.433656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:09.306 [2024-07-24 10:32:16.433781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.306 [2024-07-24 10:32:16.433783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 [2024-07-24 10:32:16.605583] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21cdf20/0x21d23f0) succeed. 00:10:09.306 [2024-07-24 10:32:16.614758] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21cf510/0x2213a80) succeed. 
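With the target up (reactors on cores 3-6 for mask 0x78) and the RDMA transport created, the rpc_cmd calls traced here and just below build the test subsystem. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a hand-run sketch of the same target-side setup, assuming the default /var/tmp/spdk.sock RPC socket and this workspace's build tree, would be roughly:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    # poll until the RPC socket answers before issuing configuration calls
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420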
00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 Malloc0 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.306 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.564 [2024-07-24 10:32:16.782435] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:09.564 { 00:10:09.564 "params": { 00:10:09.564 "name": "Nvme$subsystem", 00:10:09.564 "trtype": "$TEST_TRANSPORT", 00:10:09.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.564 "adrfam": "ipv4", 00:10:09.564 "trsvcid": "$NVMF_PORT", 00:10:09.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.564 "hdgst": ${hdgst:-false}, 00:10:09.564 "ddgst": ${ddgst:-false} 00:10:09.564 }, 00:10:09.564 "method": "bdev_nvme_attach_controller" 00:10:09.564 } 00:10:09.564 EOF 00:10:09.564 )") 00:10:09.564 10:32:16 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:09.564 10:32:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:09.564 "params": { 00:10:09.564 "name": "Nvme1", 00:10:09.564 "trtype": "rdma", 00:10:09.564 "traddr": "192.168.100.8", 00:10:09.564 "adrfam": "ipv4", 00:10:09.564 "trsvcid": "4420", 00:10:09.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.564 "hdgst": false, 00:10:09.564 "ddgst": false 00:10:09.564 }, 00:10:09.564 "method": "bdev_nvme_attach_controller" 00:10:09.564 }' 00:10:09.564 [2024-07-24 10:32:16.828057] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:10:09.564 [2024-07-24 10:32:16.828097] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122067 ] 00:10:09.564 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.564 [2024-07-24 10:32:16.881906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.564 [2024-07-24 10:32:16.923531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.564 [2024-07-24 10:32:16.923630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.564 [2024-07-24 10:32:16.923630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.821 I/O targets: 00:10:09.822 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:09.822 00:10:09.822 00:10:09.822 CUnit - A unit testing framework for C - Version 2.1-3 00:10:09.822 http://cunit.sourceforge.net/ 00:10:09.822 00:10:09.822 00:10:09.822 Suite: bdevio tests on: Nvme1n1 00:10:09.822 Test: blockdev write read block ...passed 00:10:09.822 Test: blockdev write zeroes read block ...passed 00:10:09.822 Test: blockdev write zeroes read no split ...passed 00:10:09.822 Test: blockdev write zeroes read split ...passed 00:10:09.822 Test: blockdev write zeroes read split partial ...passed 00:10:09.822 Test: blockdev reset ...[2024-07-24 10:32:17.123532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:09.822 [2024-07-24 10:32:17.146344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:09.822 [2024-07-24 10:32:17.173073] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:09.822 passed 00:10:09.822 Test: blockdev write read 8 blocks ...passed 00:10:09.822 Test: blockdev write read size > 128k ...passed 00:10:09.822 Test: blockdev write read invalid size ...passed 00:10:09.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.822 Test: blockdev write read max offset ...passed 00:10:09.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.822 Test: blockdev writev readv 8 blocks ...passed 00:10:09.822 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.822 Test: blockdev writev readv block ...passed 00:10:09.822 Test: blockdev writev readv size > 128k ...passed 00:10:09.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.822 Test: blockdev comparev and writev ...[2024-07-24 10:32:17.176005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.822 [2024-07-24 10:32:17.176635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:09.822 passed 00:10:09.822 Test: blockdev nvme passthru rw ...passed 00:10:09.822 Test: blockdev nvme passthru vendor specific ...[2024-07-24 10:32:17.176872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:09.822 [2024-07-24 10:32:17.176882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:09.822 [2024-07-24 10:32:17.176927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.176964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:09.822 [2024-07-24 10:32:17.176972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:09.822 [2024-07-24 10:32:17.177013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:09.822 [2024-07-24 10:32:17.177020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:09.822 passed 00:10:09.822 Test: blockdev nvme admin passthru ...passed 00:10:09.822 Test: blockdev copy ...passed 00:10:09.822 00:10:09.822 Run Summary: Type Total Ran Passed Failed Inactive 00:10:09.822 suites 1 1 n/a 0 0 00:10:09.822 tests 23 23 23 0 0 00:10:09.822 asserts 152 152 152 0 n/a 00:10:09.822 00:10:09.822 Elapsed time = 0.173 seconds 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:10.088 rmmod nvme_rdma 00:10:10.088 rmmod nvme_fabrics 00:10:10.088 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.088 10:32:17 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2121885 ']' 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2121885 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2121885 ']' 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2121885 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2121885 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2121885' 00:10:10.089 killing process with pid 2121885 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2121885 00:10:10.089 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2121885 00:10:10.393 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.394 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:10.394 00:10:10.394 real 0m6.521s 00:10:10.394 user 0m7.348s 00:10:10.394 sys 0m4.336s 00:10:10.394 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.394 10:32:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.394 ************************************ 00:10:10.394 END TEST nvmf_bdevio 00:10:10.394 ************************************ 00:10:10.394 10:32:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:10.394 00:10:10.394 real 3m43.560s 00:10:10.394 user 10m11.472s 00:10:10.394 sys 1m15.793s 00:10:10.394 10:32:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.394 10:32:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.394 ************************************ 00:10:10.394 END TEST nvmf_target_core 00:10:10.394 ************************************ 00:10:10.394 10:32:17 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:10.394 10:32:17 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.394 10:32:17 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.394 10:32:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:10.394 ************************************ 00:10:10.394 START TEST nvmf_target_extra 00:10:10.394 ************************************ 00:10:10.394 10:32:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:10.651 * Looking for test storage... 00:10:10.651 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.651 10:32:17 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:10.652 ************************************ 00:10:10.652 START TEST nvmf_example 00:10:10.652 ************************************ 00:10:10.652 10:32:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:10.652 * Looking for test storage... 00:10:10.652 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.652 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.910 10:32:18 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:10.910 10:32:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:16.169 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:16.169 10:32:23 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:16.169 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:16.169 Found net devices under 0000:da:00.0: mlx_0_0 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:da:00.1: mlx_0_1' 00:10:16.169 Found net devices under 0000:da:00.1: mlx_0_1 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.169 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # uname 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:16.170 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:16.170 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:16.170 altname enp218s0f0np0 00:10:16.170 altname ens818f0np0 00:10:16.170 inet 192.168.100.8/24 scope global mlx_0_0 00:10:16.170 valid_lft forever preferred_lft forever 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:16.170 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:16.170 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:16.170 altname enp218s0f1np1 00:10:16.170 altname ens818f1np1 00:10:16.170 inet 192.168.100.9/24 scope global mlx_0_1 00:10:16.170 valid_lft forever preferred_lft forever 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
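The trace above resolves each RDMA netdev's IPv4 address by piping ip -o -4 addr show through awk and cut (the get_ip_address steps from nvmf/common.sh). A minimal standalone sketch of that lookup, assuming the mlx_0_0/mlx_0_1 interface names seen in this run; the helper name below is illustrative and not part of nvmf/common.sh:

#!/usr/bin/env bash
# Sketch: look up the IPv4 address of an RDMA-capable netdev, mirroring the
# traced pipeline: ip -o -4 addr show IF | awk '{print $4}' | cut -d/ -f1
get_ipv4_of_if() {
    local ifname=$1
    ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
}

for ifname in mlx_0_0 mlx_0_1; do
    addr=$(get_ipv4_of_if "$ifname")
    # In this run these resolve to 192.168.100.8 and 192.168.100.9.
    echo "$ifname -> ${addr:-<no IPv4 assigned>}"
done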
00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:16.170 192.168.100.9' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:16.170 192.168.100.9' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:16.170 192.168.100.9' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:16.170 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:16.171 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:16.171 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.171 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2125335 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2125335 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2125335 ']' 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:10:16.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.429 10:32:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 
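The rpc_cmd calls traced in this block assemble the example RDMA target step by step: create the rdma transport, back it with a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as a namespace, and add a listener on 192.168.100.8:4420. A minimal standalone sketch of the same sequence driven through scripts/rpc.py, assuming an SPDK nvmf-capable app is already listening on the default /var/tmp/spdk.sock (the SPDK_DIR path and direct rpc.py usage are assumptions; in the test itself rpc_cmd issues these same RPCs against the build/examples/nvmf app):

#!/usr/bin/env bash
# Sketch: reproduce the traced target setup with plain rpc.py calls.
RPC="${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}/scripts/rpc.py"

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8 KiB IO unit
$RPC bdev_malloc_create 64 512                                          # 64 MiB bdev, 512 B blocks -> "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # expose the bdev as NSID 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The spdk_nvme_perf run that follows in the trace (-q 64 -o 4096 -w randrw -M 30 -t 10 against trtype:rdma traddr:192.168.100.8 trsvcid:4420) then exercises exactly this subsystem.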
00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:17.362 10:32:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:17.362 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.574 Initializing NVMe Controllers 00:10:29.574 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:29.574 Initialization complete. Launching workers. 00:10:29.574 ======================================================== 00:10:29.574 Latency(us) 00:10:29.574 Device Information : IOPS MiB/s Average min max 00:10:29.574 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25716.42 100.45 2488.48 642.17 15964.85 00:10:29.574 ======================================================== 00:10:29.574 Total : 25716.42 100.45 2488.48 642.17 15964.85 00:10:29.574 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:29.574 rmmod nvme_rdma 00:10:29.574 rmmod nvme_fabrics 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2125335 ']' 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2125335 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2125335 ']' 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2125335 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:29.574 10:32:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.574 10:32:35 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2125335 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2125335' 00:10:29.574 killing process with pid 2125335 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2125335 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2125335 00:10:29.574 nvmf threads initialize successfully 00:10:29.574 bdev subsystem init successfully 00:10:29.574 created a nvmf target service 00:10:29.574 create targets's poll groups done 00:10:29.574 all subsystems of target started 00:10:29.574 nvmf target is running 00:10:29.574 all subsystems of target stopped 00:10:29.574 destroy targets's poll groups done 00:10:29.574 destroyed the nvmf target service 00:10:29.574 bdev subsystem finish successfully 00:10:29.574 nvmf threads destroy successfully 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.574 00:10:29.574 real 0m18.293s 00:10:29.574 user 0m51.557s 00:10:29.574 sys 0m4.522s 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.574 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.574 ************************************ 00:10:29.574 END TEST nvmf_example 00:10:29.575 ************************************ 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:29.575 ************************************ 00:10:29.575 START TEST nvmf_filesystem 00:10:29.575 ************************************ 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:29.575 * Looking for test storage... 
00:10:29.575 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 
00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:10:29.575 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:10:29.576 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:29.576 #define SPDK_CONFIG_H 00:10:29.576 #define SPDK_CONFIG_APPS 1 00:10:29.576 #define SPDK_CONFIG_ARCH native 00:10:29.576 #undef SPDK_CONFIG_ASAN 00:10:29.576 #undef SPDK_CONFIG_AVAHI 00:10:29.576 #undef SPDK_CONFIG_CET 00:10:29.576 
#define SPDK_CONFIG_COVERAGE 1 00:10:29.576 #define SPDK_CONFIG_CROSS_PREFIX 00:10:29.576 #undef SPDK_CONFIG_CRYPTO 00:10:29.576 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:29.576 #undef SPDK_CONFIG_CUSTOMOCF 00:10:29.576 #undef SPDK_CONFIG_DAOS 00:10:29.576 #define SPDK_CONFIG_DAOS_DIR 00:10:29.576 #define SPDK_CONFIG_DEBUG 1 00:10:29.576 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:29.576 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:29.576 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:10:29.576 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:29.576 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:29.576 #undef SPDK_CONFIG_DPDK_UADK 00:10:29.576 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:29.576 #define SPDK_CONFIG_EXAMPLES 1 00:10:29.576 #undef SPDK_CONFIG_FC 00:10:29.576 #define SPDK_CONFIG_FC_PATH 00:10:29.576 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:29.576 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:29.576 #undef SPDK_CONFIG_FUSE 00:10:29.576 #undef SPDK_CONFIG_FUZZER 00:10:29.576 #define SPDK_CONFIG_FUZZER_LIB 00:10:29.576 #undef SPDK_CONFIG_GOLANG 00:10:29.576 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:29.576 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:29.576 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:29.576 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:29.576 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:29.576 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:29.576 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:29.576 #define SPDK_CONFIG_IDXD 1 00:10:29.576 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:29.576 #undef SPDK_CONFIG_IPSEC_MB 00:10:29.576 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:29.576 #define SPDK_CONFIG_ISAL 1 00:10:29.576 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:29.576 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:29.576 #define SPDK_CONFIG_LIBDIR 00:10:29.576 #undef SPDK_CONFIG_LTO 00:10:29.576 #define SPDK_CONFIG_MAX_LCORES 128 00:10:29.576 #define SPDK_CONFIG_NVME_CUSE 1 00:10:29.576 #undef SPDK_CONFIG_OCF 00:10:29.576 #define SPDK_CONFIG_OCF_PATH 00:10:29.576 #define SPDK_CONFIG_OPENSSL_PATH 00:10:29.576 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:29.576 #define SPDK_CONFIG_PGO_DIR 00:10:29.576 #undef SPDK_CONFIG_PGO_USE 00:10:29.576 #define SPDK_CONFIG_PREFIX /usr/local 00:10:29.576 #undef SPDK_CONFIG_RAID5F 00:10:29.576 #undef SPDK_CONFIG_RBD 00:10:29.576 #define SPDK_CONFIG_RDMA 1 00:10:29.576 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:29.576 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:29.576 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:29.576 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:29.576 #define SPDK_CONFIG_SHARED 1 00:10:29.577 #undef SPDK_CONFIG_SMA 00:10:29.577 #define SPDK_CONFIG_TESTS 1 00:10:29.577 #undef SPDK_CONFIG_TSAN 00:10:29.577 #define SPDK_CONFIG_UBLK 1 00:10:29.577 #define SPDK_CONFIG_UBSAN 1 00:10:29.577 #undef SPDK_CONFIG_UNIT_TESTS 00:10:29.577 #undef SPDK_CONFIG_URING 00:10:29.577 #define SPDK_CONFIG_URING_PATH 00:10:29.577 #undef SPDK_CONFIG_URING_ZNS 00:10:29.577 #undef SPDK_CONFIG_USDT 00:10:29.577 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:29.577 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:29.577 #undef SPDK_CONFIG_VFIO_USER 00:10:29.577 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:29.577 #define SPDK_CONFIG_VHOST 1 00:10:29.577 #define SPDK_CONFIG_VIRTIO 1 00:10:29.577 #undef SPDK_CONFIG_VTUNE 00:10:29.577 #define SPDK_CONFIG_VTUNE_DIR 00:10:29.577 #define SPDK_CONFIG_WERROR 1 00:10:29.577 
#define SPDK_CONFIG_WPDK_DIR 00:10:29.577 #undef SPDK_CONFIG_XNVME 00:10:29.577 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:29.577 10:32:36 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:29.577 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@84 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@114 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:29.578 
10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:29.578 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
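The long run of "@N -- # : value" / "export NAME" pairs above is autotest_common.sh assigning a default to every test knob and then exporting it. Condensed into a plain bash sketch, the non-default settings visible in this run are the following (every flag not listed is exported as 0 or left empty; the grouping and comments are illustrative, not part of the script):

# Effective test configuration for this nvmf-phy run, read from the trace above
export RUN_NIGHTLY=1                   # started by the nightly trigger
export SPDK_RUN_FUNCTIONAL_TEST=1
export SPDK_TEST_NVMF=1                # NVMe-oF target tests
export SPDK_TEST_NVMF_TRANSPORT=rdma   # transport under test
export SPDK_TEST_NVMF_NICS=mlx5        # Mellanox NICs expected on the node
export SPDK_TEST_NVME_CLI=1
export SPDK_RUN_UBSAN=1
export SPDK_TEST_NATIVE_DPDK=v22.11.4
export SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
export SPDK_AUTOTEST_X=true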
00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:29.579 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:29.580 10:32:36 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=rdma 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2127616 ]] 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2127616 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 
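Among the environment prepared above, the sanitizer options are the ones that follow every binary the test launches. A minimal sketch of the resulting setup, assuming the suppression file ends up containing the single libfuse3 entry echoed in the trace (the trace only shows the file being removed, written to, and referenced):

# Sanitizer runtime options exported for all test binaries (values from the trace)
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# LeakSanitizer suppressions: a known libfuse3 leak is ignored (file creation simplified)
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file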
00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.hKVssy 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hKVssy/tests/target /tmp/spdk.hKVssy 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:29.580 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1050284032 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4234145792 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=183816269824 
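The set_test_storage bookkeeping that starts above (and continues with the remaining mounts below) picks a directory with at least requested_size bytes free for scratch data. A sketch of the mechanism, with $testdir assumed to be the nvmf target test directory and the 1K-block-to-bytes conversion inferred from the byte-sized values in the trace:

# Candidate scratch directories: the test dir itself, then a /tmp fallback
requested_size=2214592512                        # 2 GiB of test data + 64 MiB headroom
storage_fallback=$(mktemp -udt spdk.XXXXXX)      # /tmp/spdk.hKVssy in this run
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"

# Index `df -T` output by mount point so each candidate's free space can be checked
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks; stored as bytes (assumption)
  avails["$mount"]=$((avail * 1024))
  uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)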
00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=195974307840 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12158038016 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97921597440 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987153920 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=65556480 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=39171825664 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=39194861568 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23035904 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97984700416 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987153920 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=2453504 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=19597426688 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=19597430784 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ 
mount 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:29.581 * Looking for test storage... 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=183816269824 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=14372630528 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.581 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # 
xtrace_fd 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.581 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.582 10:32:36 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.582 10:32:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:34.850 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:34.851 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.851 10:32:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:34.851 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:34.851 Found net devices under 0000:da:00.0: mlx_0_0 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:34.851 Found net devices under 0000:da:00.1: mlx_0_1 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.851 10:32:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:34.851 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.851 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:34.851 altname enp218s0f0np0 00:10:34.851 altname ens818f0np0 00:10:34.851 inet 192.168.100.8/24 scope global mlx_0_0 00:10:34.851 valid_lft forever preferred_lft forever 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:34.851 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.851 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:34.851 altname enp218s0f1np1 00:10:34.851 altname ens818f1np1 00:10:34.851 inet 192.168.100.9/24 scope global mlx_0_1 00:10:34.851 valid_lft forever preferred_lft forever 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:34.851 10:32:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.851 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address 
mlx_0_1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:34.852 192.168.100.9' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:34.852 192.168.100.9' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:34.852 192.168.100.9' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.852 ************************************ 00:10:34.852 START TEST nvmf_filesystem_no_in_capsule 00:10:34.852 ************************************ 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:34.852 10:32:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2130504 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2130504 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2130504 ']' 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.852 [2024-07-24 10:32:41.727383] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:10:34.852 [2024-07-24 10:32:41.727419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.852 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.852 [2024-07-24 10:32:41.782131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.852 [2024-07-24 10:32:41.824705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.852 [2024-07-24 10:32:41.824745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.852 [2024-07-24 10:32:41.824751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.852 [2024-07-24 10:32:41.824757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.852 [2024-07-24 10:32:41.824762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
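Reduced to essentials, the interface discovery and target launch traced above amount to the following sketch. The interface names, IPs, binary path and RPC socket are taken from the trace; the loop and wait logic are simplifications of the nvmf/common.sh and autotest_common.sh helpers, not verbatim excerpts.

# Sketch: resolve the RDMA-capable interfaces to IPs, as get_ip_address does above
for ifc in mlx_0_0 mlx_0_1; do
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# -> 192.168.100.8 and 192.168.100.9; head -n 1 of that list becomes
#    NVMF_FIRST_TARGET_IP, the next entry NVMF_SECOND_TARGET_IP

# Sketch: start the target and wait for its RPC socket (simplified waitforlisten)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done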
00:10:34.852 [2024-07-24 10:32:41.824810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.852 [2024-07-24 10:32:41.824910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.852 [2024-07-24 10:32:41.824998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.852 [2024-07-24 10:32:41.824999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.852 10:32:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.852 [2024-07-24 10:32:41.968001] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:34.852 [2024-07-24 10:32:41.987967] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13f36a0/0x13f7b70) succeed. 00:10:34.852 [2024-07-24 10:32:41.997111] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13f4c90/0x1439200) succeed. 
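The rpc_cmd call just traced, together with the bdev and subsystem calls that follow below, configures the transport and the test subsystem. Expressed against scripts/rpc.py (assumed here as the equivalent of the rpc_cmd wrapper), the sequence for the no-in-capsule variant is roughly:

# RDMA transport; -c 0 asks for no in-capsule data, which the target clamps to
# the 256-byte minimum reported in the warning above
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

# 512 MiB malloc bdev with 512-byte blocks (1048576 blocks, matching the
# bdev_get_bdevs output further down)
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1

# Subsystem, namespace, and an RDMA listener on the first target IP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The in-capsule variant later in this log differs only in passing -c 4096 to nvmf_create_transport.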
00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.852 Malloc1 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.852 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 [2024-07-24 10:32:42.231312] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:34.853 10:32:42 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:34.853 { 00:10:34.853 "name": "Malloc1", 00:10:34.853 "aliases": [ 00:10:34.853 "ffe4510a-90d3-4be9-be03-433d278a7c58" 00:10:34.853 ], 00:10:34.853 "product_name": "Malloc disk", 00:10:34.853 "block_size": 512, 00:10:34.853 "num_blocks": 1048576, 00:10:34.853 "uuid": "ffe4510a-90d3-4be9-be03-433d278a7c58", 00:10:34.853 "assigned_rate_limits": { 00:10:34.853 "rw_ios_per_sec": 0, 00:10:34.853 "rw_mbytes_per_sec": 0, 00:10:34.853 "r_mbytes_per_sec": 0, 00:10:34.853 "w_mbytes_per_sec": 0 00:10:34.853 }, 00:10:34.853 "claimed": true, 00:10:34.853 "claim_type": "exclusive_write", 00:10:34.853 "zoned": false, 00:10:34.853 "supported_io_types": { 00:10:34.853 "read": true, 00:10:34.853 "write": true, 00:10:34.853 "unmap": true, 00:10:34.853 "flush": true, 00:10:34.853 "reset": true, 00:10:34.853 "nvme_admin": false, 00:10:34.853 "nvme_io": false, 00:10:34.853 "nvme_io_md": false, 00:10:34.853 "write_zeroes": true, 00:10:34.853 "zcopy": true, 00:10:34.853 "get_zone_info": false, 00:10:34.853 "zone_management": false, 00:10:34.853 "zone_append": false, 00:10:34.853 "compare": false, 00:10:34.853 "compare_and_write": false, 00:10:34.853 "abort": true, 00:10:34.853 "seek_hole": false, 00:10:34.853 "seek_data": false, 00:10:34.853 "copy": true, 00:10:34.853 "nvme_iov_md": false 00:10:34.853 }, 00:10:34.853 "memory_domains": [ 00:10:34.853 { 00:10:34.853 "dma_device_id": "system", 00:10:34.853 "dma_device_type": 1 00:10:34.853 }, 00:10:34.853 { 00:10:34.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.853 "dma_device_type": 2 00:10:34.853 } 00:10:34.853 ], 00:10:34.853 "driver_specific": {} 00:10:34.853 } 00:10:34.853 ]' 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:34.853 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:35.111 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:35.111 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:35.111 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:35.111 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:35.111 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:10:35.111 10:32:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:36.048 10:32:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.048 10:32:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:36.048 10:32:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.048 10:32:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:36.048 10:32:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:37.945 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:38.202 10:32:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.280 ************************************ 00:10:39.280 START TEST filesystem_ext4 00:10:39.280 ************************************ 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:39.280 mke2fs 1.46.5 (30-Dec-2021) 00:10:39.280 Discarding device blocks: 0/522240 done 00:10:39.280 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:39.280 Filesystem UUID: 672fceb2-01d3-4f6f-ba4e-7986223f4daf 00:10:39.280 Superblock backups stored on 
blocks: 00:10:39.280 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:39.280 00:10:39.280 Allocating group tables: 0/64 done 00:10:39.280 Writing inode tables: 0/64 done 00:10:39.280 Creating journal (8192 blocks): done 00:10:39.280 Writing superblocks and filesystem accounting information: 0/64 done 00:10:39.280 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2130504 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.280 00:10:39.280 real 0m0.177s 00:10:39.280 user 0m0.019s 00:10:39.280 sys 0m0.070s 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:39.280 ************************************ 00:10:39.280 END TEST filesystem_ext4 00:10:39.280 ************************************ 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.280 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:10:39.539 ************************************ 00:10:39.539 START TEST filesystem_btrfs 00:10:39.539 ************************************ 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:39.539 btrfs-progs v6.6.2 00:10:39.539 See https://btrfs.readthedocs.io for more information. 00:10:39.539 00:10:39.539 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:39.539 NOTE: several default settings have changed in version 5.15, please make sure 00:10:39.539 this does not affect your deployments: 00:10:39.539 - DUP for metadata (-m dup) 00:10:39.539 - enabled no-holes (-O no-holes) 00:10:39.539 - enabled free-space-tree (-R free-space-tree) 00:10:39.539 00:10:39.539 Label: (null) 00:10:39.539 UUID: 9390490f-5554-450b-bb85-274203ad4ffb 00:10:39.539 Node size: 16384 00:10:39.539 Sector size: 4096 00:10:39.539 Filesystem size: 510.00MiB 00:10:39.539 Block group profiles: 00:10:39.539 Data: single 8.00MiB 00:10:39.539 Metadata: DUP 32.00MiB 00:10:39.539 System: DUP 8.00MiB 00:10:39.539 SSD detected: yes 00:10:39.539 Zoned device: no 00:10:39.539 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:39.539 Runtime features: free-space-tree 00:10:39.539 Checksum: crc32c 00:10:39.539 Number of devices: 1 00:10:39.539 Devices: 00:10:39.539 ID SIZE PATH 00:10:39.539 1 510.00MiB /dev/nvme0n1p1 00:10:39.539 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2130504 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.539 00:10:39.539 real 0m0.244s 00:10:39.539 user 0m0.038s 00:10:39.539 sys 0m0.112s 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.539 10:32:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:39.539 ************************************ 
00:10:39.539 END TEST filesystem_btrfs 00:10:39.539 ************************************ 00:10:39.797 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:39.797 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:39.797 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.797 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.797 ************************************ 00:10:39.797 START TEST filesystem_xfs 00:10:39.797 ************************************ 00:10:39.797 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:39.798 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:39.798 = sectsz=512 attr=2, projid32bit=1 00:10:39.798 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:39.798 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:39.798 data = bsize=4096 blocks=130560, imaxpct=25 00:10:39.798 = sunit=0 swidth=0 blks 00:10:39.798 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:39.798 log =internal log bsize=4096 blocks=16384, version=2 00:10:39.798 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:39.798 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:39.798 Discarding blocks...Done. 
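Once the listener is up, the host side connects, waits for the namespace, partitions it, and each filesystem_* sub-test in this trace then runs the same check from target/filesystem.sh. A stripped-down sketch, with device names, NQN and serial taken from the trace and the wait loop simplified:

# Connect and wait until the SPDK namespace shows up under its serial number
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
  --hostid=803833e2-2ada-e911-906e-0017a4403562
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done

# One GPT partition across the 512 MiB namespace
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

# Per-filesystem check: create, mount, touch/rm a file, unmount, and confirm the
# target (pid 2130504 here) and the partition are both still present
for fstype in ext4 btrfs xfs; do
  force=-f; [ "$fstype" = ext4 ] && force=-F
  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa; sync
  rm /mnt/device/aaa; sync
  umount /mnt/device
  kill -0 "$nvmfpid"
  lsblk -l -o NAME | grep -q -w nvme0n1p1
done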
00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2130504 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.798 00:10:39.798 real 0m0.188s 00:10:39.798 user 0m0.019s 00:10:39.798 sys 0m0.070s 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.798 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:39.798 ************************************ 00:10:39.798 END TEST filesystem_xfs 00:10:39.798 ************************************ 00:10:40.056 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:40.056 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:40.056 10:32:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:40.990 10:32:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2130504 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2130504 ']' 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2130504 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2130504 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2130504' 00:10:40.990 killing process with pid 2130504 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2130504 00:10:40.990 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2130504 00:10:41.249 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:41.249 00:10:41.249 real 0m7.008s 00:10:41.249 user 0m27.329s 00:10:41.249 sys 0m1.006s 00:10:41.249 10:32:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.249 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.249 ************************************ 00:10:41.249 END TEST nvmf_filesystem_no_in_capsule 00:10:41.249 ************************************ 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:41.507 ************************************ 00:10:41.507 START TEST nvmf_filesystem_in_capsule 00:10:41.507 ************************************ 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2131929 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2131929 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2131929 ']' 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
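Between the two variants the trace above also shows the common teardown: the host drops the test partition and the fabric connection, then the subsystem and the target process are removed. Roughly, with scripts/rpc.py again standing in for the rpc_cmd wrapper and names taken from the trace:

# Host side: remove the test partition and disconnect from the subsystem
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# Target side: drop the subsystem, then stop nvmf_tgt and wait for it to exit
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"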
00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.507 10:32:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.507 [2024-07-24 10:32:48.812979] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:10:41.507 [2024-07-24 10:32:48.813018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.507 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.507 [2024-07-24 10:32:48.864397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.507 [2024-07-24 10:32:48.906909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.507 [2024-07-24 10:32:48.906951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.507 [2024-07-24 10:32:48.906957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.507 [2024-07-24 10:32:48.906963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.507 [2024-07-24 10:32:48.906969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.507 [2024-07-24 10:32:48.910509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.507 [2024-07-24 10:32:48.910527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.507 [2024-07-24 10:32:48.910614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.507 [2024-07-24 10:32:48.910616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.765 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.765 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.766 [2024-07-24 10:32:49.072334] 
rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14df6a0/0x14e3b70) succeed. 00:10:41.766 [2024-07-24 10:32:49.081495] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14e0c90/0x1525200) succeed. 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.766 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.024 Malloc1 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.024 [2024-07-24 10:32:49.341174] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:42.024 10:32:49 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:42.024 { 00:10:42.024 "name": "Malloc1", 00:10:42.024 "aliases": [ 00:10:42.024 "c2c1bd96-073d-4cd2-8d5e-b1a45779547c" 00:10:42.024 ], 00:10:42.024 "product_name": "Malloc disk", 00:10:42.024 "block_size": 512, 00:10:42.024 "num_blocks": 1048576, 00:10:42.024 "uuid": "c2c1bd96-073d-4cd2-8d5e-b1a45779547c", 00:10:42.024 "assigned_rate_limits": { 00:10:42.024 "rw_ios_per_sec": 0, 00:10:42.024 "rw_mbytes_per_sec": 0, 00:10:42.024 "r_mbytes_per_sec": 0, 00:10:42.024 "w_mbytes_per_sec": 0 00:10:42.024 }, 00:10:42.024 "claimed": true, 00:10:42.024 "claim_type": "exclusive_write", 00:10:42.024 "zoned": false, 00:10:42.024 "supported_io_types": { 00:10:42.024 "read": true, 00:10:42.024 "write": true, 00:10:42.024 "unmap": true, 00:10:42.024 "flush": true, 00:10:42.024 "reset": true, 00:10:42.024 "nvme_admin": false, 00:10:42.024 "nvme_io": false, 00:10:42.024 "nvme_io_md": false, 00:10:42.024 "write_zeroes": true, 00:10:42.024 "zcopy": true, 00:10:42.024 "get_zone_info": false, 00:10:42.024 "zone_management": false, 00:10:42.024 "zone_append": false, 00:10:42.024 "compare": false, 00:10:42.024 "compare_and_write": false, 00:10:42.024 "abort": true, 00:10:42.024 "seek_hole": false, 00:10:42.024 "seek_data": false, 00:10:42.024 "copy": true, 00:10:42.024 "nvme_iov_md": false 00:10:42.024 }, 00:10:42.024 "memory_domains": [ 00:10:42.024 { 00:10:42.024 "dma_device_id": "system", 00:10:42.024 "dma_device_type": 1 00:10:42.024 }, 00:10:42.024 { 00:10:42.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.024 "dma_device_type": 2 00:10:42.024 } 00:10:42.024 ], 00:10:42.024 "driver_specific": {} 00:10:42.024 } 00:10:42.024 ]' 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:42.024 10:32:49 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:42.024 10:32:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:42.957 10:32:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.214 10:32:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:43.215 10:32:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.215 10:32:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:43.215 10:32:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:45.114 10:32:52 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:45.114 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:45.372 10:32:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.307 ************************************ 00:10:46.307 START TEST filesystem_in_capsule_ext4 00:10:46.307 ************************************ 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:46.307 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:46.307 mke2fs 1.46.5 (30-Dec-2021) 00:10:46.307 Discarding device blocks: 0/522240 done 
00:10:46.307 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:46.307 Filesystem UUID: 22694aa0-ab76-4c32-86fc-349d6b2fc53c 00:10:46.307 Superblock backups stored on blocks: 00:10:46.307 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:46.307 00:10:46.307 Allocating group tables: 0/64 done 00:10:46.307 Writing inode tables: 0/64 done 00:10:46.307 Creating journal (8192 blocks): done 00:10:46.308 Writing superblocks and filesystem accounting information: 0/64 done 00:10:46.308 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:46.308 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2131929 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.568 00:10:46.568 real 0m0.174s 00:10:46.568 user 0m0.027s 00:10:46.568 sys 0m0.061s 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:46.568 ************************************ 00:10:46.568 END TEST filesystem_in_capsule_ext4 00:10:46.568 ************************************ 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:46.568 10:32:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.568 ************************************ 00:10:46.568 START TEST filesystem_in_capsule_btrfs 00:10:46.568 ************************************ 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:46.568 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:46.569 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:46.569 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:46.569 btrfs-progs v6.6.2 00:10:46.569 See https://btrfs.readthedocs.io for more information. 00:10:46.569 00:10:46.569 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:46.569 NOTE: several default settings have changed in version 5.15, please make sure 00:10:46.569 this does not affect your deployments: 00:10:46.569 - DUP for metadata (-m dup) 00:10:46.569 - enabled no-holes (-O no-holes) 00:10:46.569 - enabled free-space-tree (-R free-space-tree) 00:10:46.569 00:10:46.569 Label: (null) 00:10:46.569 UUID: 65516f26-08ce-4e1e-95fd-3f675f1f8946 00:10:46.569 Node size: 16384 00:10:46.569 Sector size: 4096 00:10:46.569 Filesystem size: 510.00MiB 00:10:46.569 Block group profiles: 00:10:46.569 Data: single 8.00MiB 00:10:46.569 Metadata: DUP 32.00MiB 00:10:46.569 System: DUP 8.00MiB 00:10:46.569 SSD detected: yes 00:10:46.569 Zoned device: no 00:10:46.569 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:46.569 Runtime features: free-space-tree 00:10:46.569 Checksum: crc32c 00:10:46.569 Number of devices: 1 00:10:46.569 Devices: 00:10:46.569 ID SIZE PATH 00:10:46.569 1 510.00MiB /dev/nvme0n1p1 00:10:46.569 00:10:46.569 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:46.569 10:32:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2131929 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.827 00:10:46.827 real 0m0.244s 00:10:46.827 user 0m0.022s 00:10:46.827 sys 0m0.125s 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.827 10:32:54 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:46.827 ************************************ 00:10:46.827 END TEST filesystem_in_capsule_btrfs 00:10:46.827 ************************************ 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.827 ************************************ 00:10:46.827 START TEST filesystem_in_capsule_xfs 00:10:46.827 ************************************ 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:46.827 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:46.828 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:46.828 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:46.828 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:46.828 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:46.828 = sectsz=512 attr=2, projid32bit=1 00:10:46.828 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:46.828 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:46.828 data = bsize=4096 blocks=130560, imaxpct=25 00:10:46.828 = sunit=0 swidth=0 blks 00:10:46.828 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:46.828 log =internal log bsize=4096 blocks=16384, version=2 00:10:46.828 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:46.828 realtime =none extsz=4096 
blocks=0, rtextents=0 00:10:46.828 Discarding blocks...Done. 00:10:46.828 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:46.828 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2131929 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:47.086 00:10:47.086 real 0m0.185s 00:10:47.086 user 0m0.017s 00:10:47.086 sys 0m0.069s 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:47.086 ************************************ 00:10:47.086 END TEST filesystem_in_capsule_xfs 00:10:47.086 ************************************ 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:47.086 10:32:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.020 10:32:55 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2131929 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2131929 ']' 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2131929 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2131929 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2131929' 00:10:48.020 killing process with pid 2131929 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2131929 00:10:48.020 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2131929 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:48.587 00:10:48.587 real 0m7.057s 
00:10:48.587 user 0m27.492s 00:10:48.587 sys 0m0.991s 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.587 ************************************ 00:10:48.587 END TEST nvmf_filesystem_in_capsule 00:10:48.587 ************************************ 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:48.587 rmmod nvme_rdma 00:10:48.587 rmmod nvme_fabrics 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:48.587 00:10:48.587 real 0m19.563s 00:10:48.587 user 0m56.381s 00:10:48.587 sys 0m6.021s 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.587 ************************************ 00:10:48.587 END TEST nvmf_filesystem 00:10:48.587 ************************************ 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.587 ************************************ 00:10:48.587 START TEST nvmf_target_discovery 00:10:48.587 ************************************ 00:10:48.587 10:32:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:48.847 * Looking for test storage... 
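The nvmf_filesystem suite that ends above follows one connect/format/teardown cycle per in-capsule filesystem (ext4, btrfs, xfs). The sketch below restates that cycle as standalone shell, using only the NQN, RDMA address, serial and device names visible in the trace; scripts/rpc.py stands in for the rpc_cmd wrapper, and the exact option handling lives in target/filesystem.sh rather than here.

  # Connect the initiator to the subsystem exported by nvmf_tgt over RDMA
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid=803833e2-2ada-e911-906e-0017a4403562
  # Resolve the block device by the SPDK serial number reported by lsblk
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  # Partition the namespace, create a filesystem and exercise it (ext4 leg shown; btrfs/xfs are analogous)
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  mkfs.ext4 -F "/dev/${nvme_name}p1"
  mount "/dev/${nvme_name}p1" /mnt/device
  touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
  umount /mnt/device
  # Teardown: disconnect the host, delete the subsystem, stop the target process
  # ($nvmfpid is the nvmf_tgt pid recorded when the target was started; 2131929 in this run)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"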
00:10:48.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.847 10:32:56 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.847 10:32:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@296 -- # e810=() 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:54.112 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:54.113 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == 
unbound ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:54.113 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:54.113 Found net devices under 0000:da:00.0: mlx_0_0 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:54.113 Found net devices under 0000:da:00.1: mlx_0_1 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:54.113 10:33:01 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:54.113 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:54.114 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:54.114 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:54.114 altname enp218s0f0np0 00:10:54.114 altname ens818f0np0 00:10:54.114 inet 192.168.100.8/24 scope global mlx_0_0 00:10:54.114 valid_lft forever preferred_lft forever 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_1 00:10:54.114 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:54.114 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:54.114 altname enp218s0f1np1 00:10:54.114 altname ens818f1np1 00:10:54.114 inet 192.168.100.9/24 scope global mlx_0_1 00:10:54.114 valid_lft forever preferred_lft forever 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:54.114 192.168.100.9' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:54.114 192.168.100.9' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:54.114 192.168.100.9' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2136423 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2136423 00:10:54.114 10:33:01 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2136423 ']' 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.114 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.114 [2024-07-24 10:33:01.371424] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:10:54.114 [2024-07-24 10:33:01.371466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.114 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.114 [2024-07-24 10:33:01.426811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.114 [2024-07-24 10:33:01.469204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.114 [2024-07-24 10:33:01.469244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.114 [2024-07-24 10:33:01.469251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.114 [2024-07-24 10:33:01.469256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.114 [2024-07-24 10:33:01.469262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
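The block above launches the NVMe-oF target application (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and then blocks in waitforlisten until the app is ready to accept RPCs on its UNIX domain socket. A minimal standalone sketch of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a readiness probe (the harness's waitforlisten helper is more elaborate):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll until the target has created its RPC socket and answers a trivial RPC.
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done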
00:10:54.114 [2024-07-24 10:33:01.469327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.114 [2024-07-24 10:33:01.469445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.114 [2024-07-24 10:33:01.469468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.114 [2024-07-24 10:33:01.469469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.373 [2024-07-24 10:33:01.640031] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe6c6a0/0xe70b70) succeed. 00:10:54.373 [2024-07-24 10:33:01.649143] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe6dc90/0xeb2200) succeed. 
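With the target up, discovery.sh creates the RDMA transport and then provisions four null bdevs, four subsystems (cnode1..cnode4) with one namespace each, a listener per subsystem, a discovery listener, and a referral on port 4430; the xtrace that follows shows exactly these RPCs running. A condensed sketch of the same sequence, assuming rpc_cmd is equivalent to invoking scripts/rpc.py against the target's RPC socket:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in $(seq 1 4); do
      ./scripts/rpc.py bdev_null_create Null$i 102400 512      # size in MB, block size in bytes
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  done
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430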
00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.373 Null1 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.373 [2024-07-24 10:33:01.810213] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.373 Null2 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:54.373 10:33:01 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.373 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 Null3 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 Null4 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.631 10:33:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:10:54.631 00:10:54.631 Discovery Log Number of Records 6, Generation counter 6 00:10:54.631 =====Discovery Log Entry 0====== 00:10:54.631 trtype: rdma 00:10:54.631 adrfam: ipv4 00:10:54.631 subtype: current discovery subsystem 00:10:54.631 treq: not required 00:10:54.631 portid: 0 00:10:54.631 trsvcid: 4420 00:10:54.631 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:54.631 traddr: 192.168.100.8 00:10:54.632 eflags: explicit discovery connections, duplicate discovery information 00:10:54.632 rdma_prtype: not specified 00:10:54.632 rdma_qptype: connected 00:10:54.632 rdma_cms: rdma-cm 00:10:54.632 rdma_pkey: 0x0000 00:10:54.632 =====Discovery Log Entry 1====== 00:10:54.632 trtype: rdma 00:10:54.632 adrfam: ipv4 00:10:54.632 subtype: nvme subsystem 00:10:54.632 treq: not required 00:10:54.632 portid: 0 00:10:54.632 trsvcid: 4420 00:10:54.632 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:54.632 traddr: 192.168.100.8 00:10:54.632 eflags: none 00:10:54.632 rdma_prtype: not specified 00:10:54.632 rdma_qptype: connected 00:10:54.632 rdma_cms: rdma-cm 00:10:54.632 rdma_pkey: 0x0000 00:10:54.632 =====Discovery Log Entry 2====== 00:10:54.632 trtype: rdma 00:10:54.632 adrfam: ipv4 00:10:54.632 subtype: nvme subsystem 00:10:54.632 treq: not required 00:10:54.632 portid: 0 00:10:54.632 trsvcid: 4420 00:10:54.632 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:54.632 traddr: 192.168.100.8 00:10:54.632 eflags: none 00:10:54.632 rdma_prtype: not specified 00:10:54.632 rdma_qptype: connected 00:10:54.632 rdma_cms: rdma-cm 00:10:54.632 rdma_pkey: 0x0000 00:10:54.632 =====Discovery Log Entry 3====== 00:10:54.632 trtype: rdma 00:10:54.632 adrfam: ipv4 00:10:54.632 subtype: nvme subsystem 00:10:54.632 treq: not required 00:10:54.632 portid: 0 00:10:54.632 trsvcid: 4420 00:10:54.632 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:54.632 traddr: 192.168.100.8 00:10:54.632 eflags: none 00:10:54.632 rdma_prtype: not specified 00:10:54.632 rdma_qptype: connected 00:10:54.632 rdma_cms: rdma-cm 00:10:54.632 rdma_pkey: 0x0000 00:10:54.632 =====Discovery Log Entry 4====== 00:10:54.632 trtype: rdma 00:10:54.632 adrfam: ipv4 00:10:54.632 subtype: nvme subsystem 00:10:54.632 treq: not required 00:10:54.632 portid: 0 00:10:54.632 trsvcid: 4420 00:10:54.632 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:54.632 traddr: 192.168.100.8 00:10:54.632 eflags: none 00:10:54.632 rdma_prtype: not specified 00:10:54.632 rdma_qptype: connected 00:10:54.632 rdma_cms: rdma-cm 00:10:54.632 rdma_pkey: 0x0000 00:10:54.632 =====Discovery Log Entry 5====== 00:10:54.632 trtype: rdma 00:10:54.632 adrfam: ipv4 00:10:54.632 subtype: discovery subsystem referral 00:10:54.632 treq: not required 00:10:54.632 portid: 0 00:10:54.632 trsvcid: 4430 00:10:54.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:54.632 traddr: 192.168.100.8 00:10:54.632 eflags: none 00:10:54.632 rdma_prtype: unrecognized 00:10:54.632 rdma_qptype: unrecognized 00:10:54.632 rdma_cms: unrecognized 00:10:54.632 rdma_pkey: 0x0000 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:54.632 Perform nvmf subsystem discovery via RPC 00:10:54.632 10:33:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.632 [ 00:10:54.632 { 00:10:54.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:54.632 "subtype": "Discovery", 00:10:54.632 "listen_addresses": [ 00:10:54.632 { 00:10:54.632 "trtype": "RDMA", 00:10:54.632 "adrfam": "IPv4", 00:10:54.632 "traddr": "192.168.100.8", 00:10:54.632 "trsvcid": "4420" 00:10:54.632 } 00:10:54.632 ], 00:10:54.632 "allow_any_host": true, 00:10:54.632 "hosts": [] 00:10:54.632 }, 00:10:54.632 { 00:10:54.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.632 "subtype": "NVMe", 00:10:54.632 "listen_addresses": [ 00:10:54.632 { 00:10:54.632 "trtype": "RDMA", 00:10:54.632 "adrfam": "IPv4", 00:10:54.632 "traddr": "192.168.100.8", 00:10:54.632 "trsvcid": "4420" 00:10:54.632 } 00:10:54.632 ], 00:10:54.632 "allow_any_host": true, 00:10:54.632 "hosts": [], 00:10:54.632 "serial_number": "SPDK00000000000001", 00:10:54.632 "model_number": "SPDK bdev Controller", 00:10:54.632 "max_namespaces": 32, 00:10:54.632 "min_cntlid": 1, 00:10:54.632 "max_cntlid": 65519, 00:10:54.632 "namespaces": [ 00:10:54.632 { 00:10:54.632 "nsid": 1, 00:10:54.632 "bdev_name": "Null1", 00:10:54.632 "name": "Null1", 00:10:54.632 "nguid": "69901FA90AFD4CAFAAF4C1F2466A5BF7", 00:10:54.632 "uuid": "69901fa9-0afd-4caf-aaf4-c1f2466a5bf7" 00:10:54.632 } 00:10:54.632 ] 00:10:54.632 }, 00:10:54.632 { 00:10:54.632 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.632 "subtype": "NVMe", 00:10:54.632 "listen_addresses": [ 00:10:54.632 { 00:10:54.632 "trtype": "RDMA", 00:10:54.632 "adrfam": "IPv4", 00:10:54.632 "traddr": "192.168.100.8", 00:10:54.632 "trsvcid": "4420" 00:10:54.632 } 00:10:54.632 ], 00:10:54.632 "allow_any_host": true, 00:10:54.632 "hosts": [], 00:10:54.632 "serial_number": "SPDK00000000000002", 00:10:54.632 "model_number": "SPDK bdev Controller", 00:10:54.632 "max_namespaces": 32, 00:10:54.632 "min_cntlid": 1, 00:10:54.632 "max_cntlid": 65519, 00:10:54.632 "namespaces": [ 00:10:54.632 { 00:10:54.632 "nsid": 1, 00:10:54.632 "bdev_name": "Null2", 00:10:54.632 "name": "Null2", 00:10:54.632 "nguid": "45A20F052379492A88CD934A29778BE6", 00:10:54.632 "uuid": "45a20f05-2379-492a-88cd-934a29778be6" 00:10:54.632 } 00:10:54.632 ] 00:10:54.632 }, 00:10:54.632 { 00:10:54.632 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:54.632 "subtype": "NVMe", 00:10:54.632 "listen_addresses": [ 00:10:54.632 { 00:10:54.632 "trtype": "RDMA", 00:10:54.632 "adrfam": "IPv4", 00:10:54.632 "traddr": "192.168.100.8", 00:10:54.632 "trsvcid": "4420" 00:10:54.632 } 00:10:54.632 ], 00:10:54.632 "allow_any_host": true, 00:10:54.632 "hosts": [], 00:10:54.632 "serial_number": "SPDK00000000000003", 00:10:54.632 "model_number": "SPDK bdev Controller", 00:10:54.632 "max_namespaces": 32, 00:10:54.632 "min_cntlid": 1, 00:10:54.632 "max_cntlid": 65519, 00:10:54.632 "namespaces": [ 00:10:54.632 { 00:10:54.632 "nsid": 1, 00:10:54.632 "bdev_name": "Null3", 00:10:54.632 "name": "Null3", 00:10:54.632 "nguid": "3B2F1582B93A4203AA6DAC80272D82BA", 00:10:54.632 "uuid": "3b2f1582-b93a-4203-aa6d-ac80272d82ba" 00:10:54.632 } 00:10:54.632 ] 00:10:54.632 }, 00:10:54.632 { 00:10:54.632 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:54.632 "subtype": "NVMe", 00:10:54.632 "listen_addresses": [ 00:10:54.632 { 00:10:54.632 
"trtype": "RDMA", 00:10:54.632 "adrfam": "IPv4", 00:10:54.632 "traddr": "192.168.100.8", 00:10:54.632 "trsvcid": "4420" 00:10:54.632 } 00:10:54.632 ], 00:10:54.632 "allow_any_host": true, 00:10:54.632 "hosts": [], 00:10:54.632 "serial_number": "SPDK00000000000004", 00:10:54.632 "model_number": "SPDK bdev Controller", 00:10:54.632 "max_namespaces": 32, 00:10:54.632 "min_cntlid": 1, 00:10:54.632 "max_cntlid": 65519, 00:10:54.632 "namespaces": [ 00:10:54.632 { 00:10:54.632 "nsid": 1, 00:10:54.632 "bdev_name": "Null4", 00:10:54.632 "name": "Null4", 00:10:54.632 "nguid": "73496BDC1FB44D01AFB24C925464307D", 00:10:54.632 "uuid": "73496bdc-1fb4-4d01-afb2-4c925464307d" 00:10:54.632 } 00:10:54.632 ] 00:10:54.632 } 00:10:54.632 ] 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:54.632 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:54.633 
10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.633 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.890 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:54.891 10:33:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:54.891 rmmod nvme_rdma 00:10:54.891 rmmod nvme_fabrics 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2136423 ']' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2136423 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2136423 ']' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2136423 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2136423 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2136423' 00:10:54.891 killing process with pid 2136423 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2136423 00:10:54.891 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2136423 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:55.148 00:10:55.148 real 0m6.498s 00:10:55.148 user 0m5.394s 00:10:55.148 sys 0m4.356s 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.148 ************************************ 00:10:55.148 END TEST nvmf_target_discovery 
00:10:55.148 ************************************ 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.148 10:33:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.148 ************************************ 00:10:55.148 START TEST nvmf_referrals 00:10:55.149 ************************************ 00:10:55.149 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:55.406 * Looking for test storage... 00:10:55.406 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:55.406 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
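Between the two tests, nvmf/common.sh is sourced again for referrals.sh; among other things it derives the host identity (NVME_HOSTNQN / NVME_HOSTID) that every nvme discover/connect call in these tests passes via --hostnqn and --hostid, as seen in the discovery output above. A small sketch of that derivation; nvme gen-hostnqn is the real nvme-cli command traced just above, while the parameter expansion used to peel off the UUID is only one plausible way to obtain the value common.sh reports:

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID portion of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420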
00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.407 
10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.407 10:33:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.668 10:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:00.668 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:00.668 10:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:00.668 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:00.669 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:00.669 Found net devices under 0000:da:00.0: mlx_0_0 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:00.669 Found net devices under 0000:da:00.1: mlx_0_1 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 
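The loop above walks the detected net devices and keeps the two RDMA-capable ports, mlx_0_0 and mlx_0_1; the next step reads each port's IPv4 address with the ip/awk/cut pipeline visible in the xtrace (common.sh's get_ip_address). A self-contained sketch of that helper:

  get_ip_address() {
      local interface=$1
      # First IPv4 address configured on the interface, with the /prefix stripped.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1   # -> 192.168.100.9 in this run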
00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:00.669 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:00.669 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:00.669 altname enp218s0f0np0 00:11:00.669 altname ens818f0np0 00:11:00.669 inet 192.168.100.8/24 scope global mlx_0_0 00:11:00.669 valid_lft forever preferred_lft forever 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:00.669 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:00.669 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:00.670 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:00.670 altname enp218s0f1np1 00:11:00.670 altname ens818f1np1 00:11:00.670 inet 192.168.100.9/24 scope global mlx_0_1 00:11:00.670 valid_lft forever preferred_lft forever 00:11:00.670 10:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:00.670 10:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:00.670 192.168.100.9' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:00.670 192.168.100.9' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:00.670 192.168.100.9' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2139667 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2139667 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2139667 ']' 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:00.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.670 10:33:07 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.670 [2024-07-24 10:33:08.023346] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:11:00.670 [2024-07-24 10:33:08.023390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.670 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.670 [2024-07-24 10:33:08.078971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.670 [2024-07-24 10:33:08.119165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.670 [2024-07-24 10:33:08.119206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.670 [2024-07-24 10:33:08.119217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.670 [2024-07-24 10:33:08.119223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.670 [2024-07-24 10:33:08.119227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.670 [2024-07-24 10:33:08.119276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.670 [2024-07-24 10:33:08.119372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.670 [2024-07-24 10:33:08.119460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.670 [2024-07-24 10:33:08.119461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.929 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.929 [2024-07-24 10:33:08.292748] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12376a0/0x123bb70) succeed. 00:11:00.929 [2024-07-24 10:33:08.301833] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1238c90/0x127d200) succeed. 
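With both NICs addressed, the trace above splits RDMA_IP_LIST into the first and second target IPs using head/tail before the referral test proper starts; a sketch of that split with the values captured in this run:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'   # as gathered above

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "$NVMF_FIRST_TARGET_IP"    # 192.168.100.8
echo "$NVMF_SECOND_TARGET_IP"   # 192.168.100.9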
00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.187 [2024-07-24 10:33:08.423268] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.187 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:01.446 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:01.447 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:01.704 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:01.704 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:01.704 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:01.704 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:01.704 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:01.704 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:01.705 10:33:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:01.705 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:01.705 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:01.705 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:01.705 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:01.705 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:01.705 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:01.963 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
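The checks running through this part of the test compare two views of the referral list: what the target reports over RPC and what an initiator sees through discovery, using the jq filters shown in the trace. A sketch of one such round trip; calling SPDK's rpc.py directly is an assumption here, since the trace goes through the rpc_cmd wrapper:

# Add one referral, then compare the RPC view with the discovery view.
scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430

rpc_view=$(scripts/rpc.py nvmf_discovery_get_referrals \
    | jq -r '.[].address.traddr' | sort)

nvme_view=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t rdma -a 192.168.100.8 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)

[[ $rpc_view == "$nvme_view" ]] && echo "referral views match"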
00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.222 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:02.222 rmmod nvme_rdma 00:11:02.480 rmmod nvme_fabrics 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2139667 ']' 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2139667 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2139667 ']' 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2139667 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2139667 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2139667' 00:11:02.480 killing process with pid 2139667 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2139667 00:11:02.480 10:33:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2139667 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:02.739 00:11:02.739 real 0m7.441s 00:11:02.739 user 0m9.573s 00:11:02.739 sys 0m4.685s 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.739 10:33:10 
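The nvmftestfini call traced above then tears everything down: the NVMe fabrics modules are unloaded and the nvmf_tgt process recorded in nvmfpid is stopped. A condensed sketch of that teardown, omitting the retry loop and the reactor_0/sudo safety checks the real helpers perform:

sync
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics

if kill -0 "$nvmfpid" 2>/dev/null; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid"
fi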
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.739 ************************************ 00:11:02.739 END TEST nvmf_referrals 00:11:02.739 ************************************ 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.739 ************************************ 00:11:02.739 START TEST nvmf_connect_disconnect 00:11:02.739 ************************************ 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:02.739 * Looking for test storage... 00:11:02.739 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.739 10:33:10 
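As connect_disconnect sources nvmf/common.sh above, it builds the host identity used by every nvme command later in the test. A sketch of that setup; gen-hostnqn and the NVME_HOST array are as traced, while deriving the host ID by stripping the NQN prefix is an assumption (the trace only shows the resulting value):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: bare UUID taken from the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")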
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.739 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.740 10:33:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.003 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.004 10:33:14 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:08.004 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:08.004 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:08.004 10:33:14 
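The "Found net devices under 0000:da:00.x" lines that follow come from a per-function sysfs walk over the matched Mellanox devices; a sketch of that walk with the BDFs reported in this run:

for pci in 0000:da:00.0 0000:da:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep the netdev basenames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done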
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:08.004 Found net devices under 0000:da:00.0: mlx_0_0 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:08.004 Found net devices under 0000:da:00.1: mlx_0_1 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:08.004 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:11:08.005 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.005 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:08.005 altname enp218s0f0np0 00:11:08.005 altname ens818f0np0 00:11:08.005 inet 192.168.100.8/24 scope global mlx_0_0 00:11:08.005 valid_lft forever preferred_lft forever 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:08.005 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.005 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:08.005 altname enp218s0f1np1 00:11:08.005 altname ens818f1np1 00:11:08.005 inet 192.168.100.9/24 scope global mlx_0_1 00:11:08.005 valid_lft forever preferred_lft forever 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:08.005 192.168.100.9' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:08.005 192.168.100.9' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:08.005 192.168.100.9' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 
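[Editor's note] The trace above derives each target address by parsing `ip -o -4 addr show` for the RDMA netdev; the awk/cut pipeline is visible verbatim in the xtrace. As a stand-alone illustration of that step (interface names and addresses copied from this log; the function below mirrors the traced helper only as a sketch, not its exact source):

  # Sketch: extract the first IPv4 address assigned to an RDMA netdev,
  # mirroring the pipeline traced above (field 4 is "ADDR/PREFIX"; cut drops the prefix).
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed
  get_ip_address mlx_0_1   # prints 192.168.100.9
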
00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2143425 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2143425 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2143425 ']' 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.005 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.006 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.006 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.006 10:33:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 [2024-07-24 10:33:14.878584] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:11:08.006 [2024-07-24 10:33:14.878631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.006 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.006 [2024-07-24 10:33:14.933341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.006 [2024-07-24 10:33:14.974344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.006 [2024-07-24 10:33:14.974381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
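[Editor's note] The `waitforlisten` step above blocks until the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket at /var/tmp/spdk.sock. A rough equivalent, assuming rpc.py is polled directly (the flags and paths are copied from the trace; the polling loop is only a sketch and not the helper's actual implementation):

  # Sketch: start the target with the same flags as in the trace, then wait for the RPC socket.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # rpc_get_methods is a cheap query; it only succeeds once the app is listening.
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
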
00:11:08.006 [2024-07-24 10:33:14.974388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.006 [2024-07-24 10:33:14.974394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.006 [2024-07-24 10:33:14.974399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.006 [2024-07-24 10:33:14.974434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.006 [2024-07-24 10:33:14.974534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.006 [2024-07-24 10:33:14.974607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.006 [2024-07-24 10:33:14.974608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 [2024-07-24 10:33:15.116136] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:08.006 [2024-07-24 10:33:15.136184] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17046a0/0x1708b70) succeed. 00:11:08.006 [2024-07-24 10:33:15.145297] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1705c90/0x174a200) succeed. 
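[Editor's note] At this point the RDMA transport has been created and both mlx5 IB devices registered; the trace that follows provisions the test subsystem over the same RPC socket. Collected in one place, the sequence amounts to the calls below (arguments copied from the surrounding trace; the test issues them through its rpc_cmd wrapper, so the explicit rpc.py form is only an equivalent sketch):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  bdev=$(rpc.py bdev_malloc_create 64 512)          # returns the bdev name, Malloc0 here
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
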
00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 [2024-07-24 10:33:15.284344] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:08.006 10:33:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:11.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.983 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:30.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.545 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:19.859 10:38:27 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:19.859 rmmod nvme_rdma 00:16:19.859 rmmod nvme_fabrics 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2143425 ']' 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2143425 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2143425 ']' 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2143425 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2143425 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2143425' 00:16:19.859 killing process with pid 2143425 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2143425 00:16:19.859 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2143425 00:16:20.117 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:20.118 00:16:20.118 real 5m17.330s 00:16:20.118 user 20m45.053s 00:16:20.118 sys 0m13.347s 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.118 
************************************ 00:16:20.118 END TEST nvmf_connect_disconnect 00:16:20.118 ************************************ 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.118 ************************************ 00:16:20.118 START TEST nvmf_multitarget 00:16:20.118 ************************************ 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:20.118 * Looking for test storage... 00:16:20.118 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.118 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.376 10:38:27 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.376 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:16:20.377 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.644 
10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:25.644 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:25.644 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:25.644 Found net devices under 0000:da:00.0: mlx_0_0 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:25.644 Found net devices under 0000:da:00.1: mlx_0_1 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:25.644 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.645 10:38:32 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:25.645 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:25.645 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:16:25.645 altname enp218s0f0np0 00:16:25.645 altname ens818f0np0 00:16:25.645 inet 192.168.100.8/24 scope global mlx_0_0 00:16:25.645 valid_lft forever preferred_lft forever 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:25.645 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:25.645 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:16:25.645 altname enp218s0f1np1 00:16:25.645 altname ens818f1np1 00:16:25.645 inet 192.168.100.9/24 scope global mlx_0_1 00:16:25.645 valid_lft forever preferred_lft forever 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:16:25.645 10:38:32 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:25.645 192.168.100.9' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:25.645 192.168.100.9' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:25.645 192.168.100.9' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2199686 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2199686 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2199686 ']' 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:16:25.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.645 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:25.645 [2024-07-24 10:38:32.974013] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:16:25.646 [2024-07-24 10:38:32.974057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.646 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.646 [2024-07-24 10:38:33.029544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.646 [2024-07-24 10:38:33.070724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.646 [2024-07-24 10:38:33.070763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.646 [2024-07-24 10:38:33.070770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.646 [2024-07-24 10:38:33.070776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.646 [2024-07-24 10:38:33.070781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.646 [2024-07-24 10:38:33.070825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.646 [2024-07-24 10:38:33.070923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.646 [2024-07-24 10:38:33.071028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.646 [2024-07-24 10:38:33.071029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.904 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.904 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:25.904 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.904 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:25.904 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:25.905 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.905 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:25.905 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:25.905 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:25.905 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:25.905 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:26.162 "nvmf_tgt_1" 00:16:26.162 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:26.162 "nvmf_tgt_2" 00:16:26.162 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:26.162 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:26.420 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:26.420 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:26.420 true 00:16:26.420 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:26.420 true 00:16:26.420 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:26.420 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:16:26.678 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:26.679 rmmod nvme_rdma 00:16:26.679 rmmod nvme_fabrics 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2199686 ']' 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2199686 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2199686 ']' 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 
-- # kill -0 2199686 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.679 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2199686 00:16:26.679 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.679 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.679 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2199686' 00:16:26.679 killing process with pid 2199686 00:16:26.679 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2199686 00:16:26.679 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2199686 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:26.937 00:16:26.937 real 0m6.739s 00:16:26.937 user 0m6.651s 00:16:26.937 sys 0m4.450s 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.937 ************************************ 00:16:26.937 END TEST nvmf_multitarget 00:16:26.937 ************************************ 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.937 ************************************ 00:16:26.937 START TEST nvmf_rpc 00:16:26.937 ************************************ 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:16:26.937 * Looking for test storage... 
00:16:26.937 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.937 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.938 10:38:34 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.938 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:32.207 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:32.207 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.207 
10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:32.207 Found net devices under 0000:da:00.0: mlx_0_0 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:32.207 Found net devices under 0000:da:00.1: mlx_0_1 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:32.207 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:32.208 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:32.208 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:16:32.208 altname enp218s0f0np0 00:16:32.208 altname ens818f0np0 00:16:32.208 inet 192.168.100.8/24 scope global mlx_0_0 00:16:32.208 valid_lft forever preferred_lft forever 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:32.208 10:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:32.208 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:32.208 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:16:32.208 altname enp218s0f1np1 00:16:32.208 altname ens818f1np1 00:16:32.208 inet 192.168.100.9/24 scope global mlx_0_1 00:16:32.208 valid_lft forever preferred_lft forever 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:16:32.208 10:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:32.208 192.168.100.9' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:32.208 192.168.100.9' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:32.208 192.168.100.9' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2202992 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@482 -- # waitforlisten 2202992 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2202992 ']' 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.208 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.209 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.209 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.209 [2024-07-24 10:38:39.655674] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:16:32.209 [2024-07-24 10:38:39.655713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.467 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.467 [2024-07-24 10:38:39.711822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.467 [2024-07-24 10:38:39.753349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.467 [2024-07-24 10:38:39.753389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.467 [2024-07-24 10:38:39.753396] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.467 [2024-07-24 10:38:39.753401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.467 [2024-07-24 10:38:39.753408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:32.467 [2024-07-24 10:38:39.753443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.467 [2024-07-24 10:38:39.753463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.467 [2024-07-24 10:38:39.753538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.467 [2024-07-24 10:38:39.753540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:32.467 "tick_rate": 2100000000, 00:16:32.467 "poll_groups": [ 00:16:32.467 { 00:16:32.467 "name": "nvmf_tgt_poll_group_000", 00:16:32.467 "admin_qpairs": 0, 00:16:32.467 "io_qpairs": 0, 00:16:32.467 "current_admin_qpairs": 0, 00:16:32.467 "current_io_qpairs": 0, 00:16:32.467 "pending_bdev_io": 0, 00:16:32.467 "completed_nvme_io": 0, 00:16:32.467 "transports": [] 00:16:32.467 }, 00:16:32.467 { 00:16:32.467 "name": "nvmf_tgt_poll_group_001", 00:16:32.467 "admin_qpairs": 0, 00:16:32.467 "io_qpairs": 0, 00:16:32.467 "current_admin_qpairs": 0, 00:16:32.467 "current_io_qpairs": 0, 00:16:32.467 "pending_bdev_io": 0, 00:16:32.467 "completed_nvme_io": 0, 00:16:32.467 "transports": [] 00:16:32.467 }, 00:16:32.467 { 00:16:32.467 "name": "nvmf_tgt_poll_group_002", 00:16:32.467 "admin_qpairs": 0, 00:16:32.467 "io_qpairs": 0, 00:16:32.467 "current_admin_qpairs": 0, 00:16:32.467 "current_io_qpairs": 0, 00:16:32.467 "pending_bdev_io": 0, 00:16:32.467 "completed_nvme_io": 0, 00:16:32.467 "transports": [] 00:16:32.467 }, 00:16:32.467 { 00:16:32.467 "name": "nvmf_tgt_poll_group_003", 00:16:32.467 "admin_qpairs": 0, 00:16:32.467 "io_qpairs": 0, 00:16:32.467 "current_admin_qpairs": 0, 00:16:32.467 "current_io_qpairs": 0, 00:16:32.467 "pending_bdev_io": 0, 00:16:32.467 "completed_nvme_io": 0, 00:16:32.467 "transports": [] 00:16:32.467 } 00:16:32.467 ] 00:16:32.467 }' 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:32.467 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:32.725 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:16:32.725 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:32.725 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:32.725 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:32.725 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.725 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.725 [2024-07-24 10:38:40.026459] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5bb700/0x5bfbd0) succeed. 00:16:32.725 [2024-07-24 10:38:40.035874] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5bccf0/0x601260) succeed. 00:16:32.725 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.725 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:32.725 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.725 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.983 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.983 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:32.983 "tick_rate": 2100000000, 00:16:32.983 "poll_groups": [ 00:16:32.983 { 00:16:32.983 "name": "nvmf_tgt_poll_group_000", 00:16:32.983 "admin_qpairs": 0, 00:16:32.983 "io_qpairs": 0, 00:16:32.983 "current_admin_qpairs": 0, 00:16:32.983 "current_io_qpairs": 0, 00:16:32.983 "pending_bdev_io": 0, 00:16:32.983 "completed_nvme_io": 0, 00:16:32.983 "transports": [ 00:16:32.983 { 00:16:32.983 "trtype": "RDMA", 00:16:32.983 "pending_data_buffer": 0, 00:16:32.983 "devices": [ 00:16:32.983 { 00:16:32.983 "name": "mlx5_0", 00:16:32.983 "polls": 15132, 00:16:32.983 "idle_polls": 15132, 00:16:32.983 "completions": 0, 00:16:32.983 "requests": 0, 00:16:32.983 "request_latency": 0, 00:16:32.983 "pending_free_request": 0, 00:16:32.983 "pending_rdma_read": 0, 00:16:32.983 "pending_rdma_write": 0, 00:16:32.983 "pending_rdma_send": 0, 00:16:32.983 "total_send_wrs": 0, 00:16:32.983 "send_doorbell_updates": 0, 00:16:32.983 "total_recv_wrs": 4096, 00:16:32.983 "recv_doorbell_updates": 1 00:16:32.983 }, 00:16:32.983 { 00:16:32.983 "name": "mlx5_1", 00:16:32.983 "polls": 15132, 00:16:32.983 "idle_polls": 15132, 00:16:32.983 "completions": 0, 00:16:32.983 "requests": 0, 00:16:32.983 "request_latency": 0, 00:16:32.983 "pending_free_request": 0, 00:16:32.983 "pending_rdma_read": 0, 00:16:32.983 "pending_rdma_write": 0, 00:16:32.983 "pending_rdma_send": 0, 00:16:32.983 "total_send_wrs": 0, 00:16:32.983 "send_doorbell_updates": 0, 00:16:32.983 "total_recv_wrs": 4096, 00:16:32.983 "recv_doorbell_updates": 1 00:16:32.983 } 00:16:32.983 ] 00:16:32.983 } 00:16:32.983 ] 00:16:32.983 }, 00:16:32.983 { 00:16:32.983 "name": "nvmf_tgt_poll_group_001", 00:16:32.983 "admin_qpairs": 0, 00:16:32.983 "io_qpairs": 0, 00:16:32.983 "current_admin_qpairs": 0, 00:16:32.983 "current_io_qpairs": 0, 00:16:32.983 "pending_bdev_io": 0, 00:16:32.983 "completed_nvme_io": 0, 00:16:32.983 "transports": [ 00:16:32.983 { 00:16:32.983 "trtype": "RDMA", 00:16:32.983 "pending_data_buffer": 0, 00:16:32.983 "devices": [ 00:16:32.983 { 00:16:32.983 "name": "mlx5_0", 
00:16:32.983 "polls": 9989, 00:16:32.983 "idle_polls": 9989, 00:16:32.983 "completions": 0, 00:16:32.983 "requests": 0, 00:16:32.983 "request_latency": 0, 00:16:32.983 "pending_free_request": 0, 00:16:32.983 "pending_rdma_read": 0, 00:16:32.983 "pending_rdma_write": 0, 00:16:32.983 "pending_rdma_send": 0, 00:16:32.983 "total_send_wrs": 0, 00:16:32.983 "send_doorbell_updates": 0, 00:16:32.983 "total_recv_wrs": 4096, 00:16:32.983 "recv_doorbell_updates": 1 00:16:32.983 }, 00:16:32.983 { 00:16:32.983 "name": "mlx5_1", 00:16:32.983 "polls": 9989, 00:16:32.983 "idle_polls": 9989, 00:16:32.983 "completions": 0, 00:16:32.983 "requests": 0, 00:16:32.983 "request_latency": 0, 00:16:32.983 "pending_free_request": 0, 00:16:32.983 "pending_rdma_read": 0, 00:16:32.983 "pending_rdma_write": 0, 00:16:32.983 "pending_rdma_send": 0, 00:16:32.983 "total_send_wrs": 0, 00:16:32.983 "send_doorbell_updates": 0, 00:16:32.983 "total_recv_wrs": 4096, 00:16:32.983 "recv_doorbell_updates": 1 00:16:32.983 } 00:16:32.983 ] 00:16:32.983 } 00:16:32.983 ] 00:16:32.983 }, 00:16:32.983 { 00:16:32.983 "name": "nvmf_tgt_poll_group_002", 00:16:32.983 "admin_qpairs": 0, 00:16:32.983 "io_qpairs": 0, 00:16:32.983 "current_admin_qpairs": 0, 00:16:32.983 "current_io_qpairs": 0, 00:16:32.983 "pending_bdev_io": 0, 00:16:32.983 "completed_nvme_io": 0, 00:16:32.983 "transports": [ 00:16:32.983 { 00:16:32.983 "trtype": "RDMA", 00:16:32.983 "pending_data_buffer": 0, 00:16:32.983 "devices": [ 00:16:32.983 { 00:16:32.983 "name": "mlx5_0", 00:16:32.983 "polls": 5298, 00:16:32.983 "idle_polls": 5298, 00:16:32.983 "completions": 0, 00:16:32.983 "requests": 0, 00:16:32.983 "request_latency": 0, 00:16:32.983 "pending_free_request": 0, 00:16:32.983 "pending_rdma_read": 0, 00:16:32.983 "pending_rdma_write": 0, 00:16:32.983 "pending_rdma_send": 0, 00:16:32.983 "total_send_wrs": 0, 00:16:32.983 "send_doorbell_updates": 0, 00:16:32.983 "total_recv_wrs": 4096, 00:16:32.983 "recv_doorbell_updates": 1 00:16:32.983 }, 00:16:32.983 { 00:16:32.983 "name": "mlx5_1", 00:16:32.983 "polls": 5298, 00:16:32.983 "idle_polls": 5298, 00:16:32.983 "completions": 0, 00:16:32.983 "requests": 0, 00:16:32.983 "request_latency": 0, 00:16:32.983 "pending_free_request": 0, 00:16:32.983 "pending_rdma_read": 0, 00:16:32.983 "pending_rdma_write": 0, 00:16:32.983 "pending_rdma_send": 0, 00:16:32.983 "total_send_wrs": 0, 00:16:32.983 "send_doorbell_updates": 0, 00:16:32.983 "total_recv_wrs": 4096, 00:16:32.983 "recv_doorbell_updates": 1 00:16:32.983 } 00:16:32.983 ] 00:16:32.983 } 00:16:32.983 ] 00:16:32.983 }, 00:16:32.983 { 00:16:32.983 "name": "nvmf_tgt_poll_group_003", 00:16:32.983 "admin_qpairs": 0, 00:16:32.983 "io_qpairs": 0, 00:16:32.983 "current_admin_qpairs": 0, 00:16:32.983 "current_io_qpairs": 0, 00:16:32.983 "pending_bdev_io": 0, 00:16:32.983 "completed_nvme_io": 0, 00:16:32.983 "transports": [ 00:16:32.983 { 00:16:32.983 "trtype": "RDMA", 00:16:32.984 "pending_data_buffer": 0, 00:16:32.984 "devices": [ 00:16:32.984 { 00:16:32.984 "name": "mlx5_0", 00:16:32.984 "polls": 904, 00:16:32.984 "idle_polls": 904, 00:16:32.984 "completions": 0, 00:16:32.984 "requests": 0, 00:16:32.984 "request_latency": 0, 00:16:32.984 "pending_free_request": 0, 00:16:32.984 "pending_rdma_read": 0, 00:16:32.984 "pending_rdma_write": 0, 00:16:32.984 "pending_rdma_send": 0, 00:16:32.984 "total_send_wrs": 0, 00:16:32.984 "send_doorbell_updates": 0, 00:16:32.984 "total_recv_wrs": 4096, 00:16:32.984 "recv_doorbell_updates": 1 00:16:32.984 }, 00:16:32.984 { 00:16:32.984 "name": "mlx5_1", 
00:16:32.984 "polls": 904, 00:16:32.984 "idle_polls": 904, 00:16:32.984 "completions": 0, 00:16:32.984 "requests": 0, 00:16:32.984 "request_latency": 0, 00:16:32.984 "pending_free_request": 0, 00:16:32.984 "pending_rdma_read": 0, 00:16:32.984 "pending_rdma_write": 0, 00:16:32.984 "pending_rdma_send": 0, 00:16:32.984 "total_send_wrs": 0, 00:16:32.984 "send_doorbell_updates": 0, 00:16:32.984 "total_recv_wrs": 4096, 00:16:32.984 "recv_doorbell_updates": 1 00:16:32.984 } 00:16:32.984 ] 00:16:32.984 } 00:16:32.984 ] 00:16:32.984 } 00:16:32.984 ] 00:16:32.984 }' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:32.984 10:38:40 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.984 Malloc1 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:32.984 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.241 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.242 [2024-07-24 10:38:40.471864] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:33.242 10:38:40 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:16:33.242 [2024-07-24 10:38:40.517830] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:16:33.242 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:33.242 could not add new controller: failed to write to nvme-fabrics device 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.242 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:34.175 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.175 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:34.175 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.175 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:34.175 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.700 10:38:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.700 10:38:43 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.700 10:38:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.700 10:38:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:36.700 10:38:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.700 10:38:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:36.700 10:38:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:37.265 [2024-07-24 10:38:44.569316] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:16:37.265 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:37.265 could not add new controller: failed to write to nvme-fabrics device 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.265 10:38:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:38.198 10:38:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.198 10:38:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.198 10:38:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.198 10:38:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:38.198 10:38:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:40.722 10:38:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:40.722 10:38:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:40.722 10:38:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.722 10:38:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:40.722 10:38:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.722 10:38:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:40.722 10:38:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 [2024-07-24 10:38:48.591941] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.287 10:38:48 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.287 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.288 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:42.221 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:42.221 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:42.221 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.221 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:42.221 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:44.747 10:38:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:44.747 10:38:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:44.747 10:38:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.747 10:38:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:44.747 10:38:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.747 10:38:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:44.747 10:38:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.311 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.311 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.312 [2024-07-24 10:38:52.595550] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.312 10:38:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:46.244 10:38:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.244 10:38:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:46.244 10:38:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.244 10:38:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:46.244 10:38:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:48.142 10:38:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:48.142 10:38:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:48.142 
10:38:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.142 10:38:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:48.142 10:38:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.400 10:38:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:48.400 10:38:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 [2024-07-24 10:38:56.609335] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.413 10:38:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:50.349 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.349 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:50.349 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.349 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:50.349 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:52.249 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:52.249 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:52.249 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.249 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:52.249 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.249 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:52.249 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.184 10:39:00 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.184 [2024-07-24 10:39:00.612325] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.184 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:54.559 10:39:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.559 10:39:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:54.559 10:39:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.559 10:39:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:54.559 10:39:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:56.458 10:39:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:56.458 10:39:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:56.458 10:39:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.458 10:39:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:56.458 10:39:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.458 10:39:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:56.458 10:39:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.393 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.393 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.393 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.393 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.393 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.393 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:57.394 10:39:04 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.394 [2024-07-24 10:39:04.615801] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.394 10:39:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:58.328 10:39:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.328 10:39:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:58.328 10:39:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.328 10:39:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:58.328 10:39:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:00.229 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:00.229 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:00.229 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.229 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:00.229 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:17:00.229 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:00.230 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.164 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 [2024-07-24 10:39:08.628813] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 [2024-07-24 10:39:08.676959] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 [2024-07-24 10:39:08.729189] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.422 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 [2024-07-24 10:39:08.777449] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
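The target/rpc.sh@99-@107 loop traced above exercises the bare subsystem lifecycle over RPC, with no host ever connecting: create the subsystem, expose it on the RDMA listener, attach the Malloc1 namespace, open it to any host, then remove the namespace and delete the subsystem. Outside the test wrapper (rpc_cmd), the same sequence can be driven directly with SPDK's scripts/rpc.py — a minimal sketch only, assuming a target is already running with its default RPC socket and that the Malloc1 bdev already exists:

    # create a subsystem with a fixed serial and expose it over NVMe/RDMA on 192.168.100.8:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # attach the Malloc1 bdev as a namespace and allow any host NQN to connect
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # tear down in the reverse order, as the loop above does
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Earlier iterations in this log interleave the same RPCs with nvme connect / nvme disconnect on the host side; this loop deliberately skips the host step so the final nvmf_get_stats call below reflects only the traffic generated by the previous connect/disconnect rounds.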
00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 [2024-07-24 10:39:08.825521] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.423 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.681 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.681 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:01.681 "tick_rate": 2100000000, 00:17:01.681 "poll_groups": [ 00:17:01.681 { 00:17:01.681 "name": "nvmf_tgt_poll_group_000", 00:17:01.681 "admin_qpairs": 2, 00:17:01.681 "io_qpairs": 27, 00:17:01.681 "current_admin_qpairs": 0, 00:17:01.681 "current_io_qpairs": 0, 00:17:01.681 "pending_bdev_io": 0, 00:17:01.681 "completed_nvme_io": 210, 00:17:01.681 "transports": [ 00:17:01.681 { 00:17:01.681 "trtype": "RDMA", 00:17:01.681 "pending_data_buffer": 0, 00:17:01.681 "devices": [ 00:17:01.681 { 00:17:01.681 "name": "mlx5_0", 00:17:01.681 "polls": 3443694, 00:17:01.681 "idle_polls": 3443246, 00:17:01.681 "completions": 529, 00:17:01.681 "requests": 264, 00:17:01.681 "request_latency": 54881792, 00:17:01.681 "pending_free_request": 0, 00:17:01.681 "pending_rdma_read": 0, 00:17:01.681 "pending_rdma_write": 0, 00:17:01.681 "pending_rdma_send": 0, 00:17:01.681 "total_send_wrs": 473, 00:17:01.681 "send_doorbell_updates": 215, 00:17:01.681 "total_recv_wrs": 4360, 00:17:01.681 "recv_doorbell_updates": 215 00:17:01.681 }, 00:17:01.681 { 00:17:01.681 "name": "mlx5_1", 00:17:01.681 "polls": 3443694, 00:17:01.681 "idle_polls": 3443694, 00:17:01.681 "completions": 0, 00:17:01.681 "requests": 0, 00:17:01.681 "request_latency": 0, 00:17:01.681 "pending_free_request": 0, 00:17:01.681 "pending_rdma_read": 0, 00:17:01.681 "pending_rdma_write": 0, 00:17:01.681 "pending_rdma_send": 0, 00:17:01.681 "total_send_wrs": 0, 00:17:01.681 "send_doorbell_updates": 0, 00:17:01.681 "total_recv_wrs": 4096, 00:17:01.681 "recv_doorbell_updates": 1 00:17:01.681 } 00:17:01.681 ] 00:17:01.681 } 00:17:01.681 ] 00:17:01.681 }, 00:17:01.681 { 00:17:01.681 "name": "nvmf_tgt_poll_group_001", 00:17:01.681 "admin_qpairs": 2, 00:17:01.681 "io_qpairs": 26, 00:17:01.681 "current_admin_qpairs": 0, 00:17:01.681 "current_io_qpairs": 0, 00:17:01.681 "pending_bdev_io": 0, 00:17:01.681 "completed_nvme_io": 78, 00:17:01.681 "transports": [ 00:17:01.681 { 00:17:01.681 "trtype": "RDMA", 00:17:01.681 "pending_data_buffer": 0, 00:17:01.681 "devices": [ 00:17:01.681 { 00:17:01.681 "name": "mlx5_0", 00:17:01.681 "polls": 3485938, 00:17:01.681 "idle_polls": 3485694, 00:17:01.681 "completions": 264, 00:17:01.681 "requests": 132, 00:17:01.681 "request_latency": 19122206, 00:17:01.681 "pending_free_request": 0, 00:17:01.681 "pending_rdma_read": 0, 00:17:01.681 "pending_rdma_write": 0, 00:17:01.681 "pending_rdma_send": 0, 00:17:01.681 "total_send_wrs": 210, 00:17:01.682 "send_doorbell_updates": 120, 00:17:01.682 "total_recv_wrs": 4228, 00:17:01.682 "recv_doorbell_updates": 121 00:17:01.682 }, 00:17:01.682 { 00:17:01.682 "name": "mlx5_1", 00:17:01.682 "polls": 3485938, 00:17:01.682 "idle_polls": 3485938, 00:17:01.682 "completions": 0, 00:17:01.682 "requests": 0, 00:17:01.682 "request_latency": 0, 00:17:01.682 "pending_free_request": 0, 00:17:01.682 
"pending_rdma_read": 0, 00:17:01.682 "pending_rdma_write": 0, 00:17:01.682 "pending_rdma_send": 0, 00:17:01.682 "total_send_wrs": 0, 00:17:01.682 "send_doorbell_updates": 0, 00:17:01.682 "total_recv_wrs": 4096, 00:17:01.682 "recv_doorbell_updates": 1 00:17:01.682 } 00:17:01.682 ] 00:17:01.682 } 00:17:01.682 ] 00:17:01.682 }, 00:17:01.682 { 00:17:01.682 "name": "nvmf_tgt_poll_group_002", 00:17:01.682 "admin_qpairs": 1, 00:17:01.682 "io_qpairs": 26, 00:17:01.682 "current_admin_qpairs": 0, 00:17:01.682 "current_io_qpairs": 0, 00:17:01.682 "pending_bdev_io": 0, 00:17:01.682 "completed_nvme_io": 90, 00:17:01.682 "transports": [ 00:17:01.682 { 00:17:01.682 "trtype": "RDMA", 00:17:01.682 "pending_data_buffer": 0, 00:17:01.682 "devices": [ 00:17:01.682 { 00:17:01.682 "name": "mlx5_0", 00:17:01.682 "polls": 3516729, 00:17:01.682 "idle_polls": 3516511, 00:17:01.682 "completions": 237, 00:17:01.682 "requests": 118, 00:17:01.682 "request_latency": 19077676, 00:17:01.682 "pending_free_request": 0, 00:17:01.682 "pending_rdma_read": 0, 00:17:01.682 "pending_rdma_write": 0, 00:17:01.682 "pending_rdma_send": 0, 00:17:01.682 "total_send_wrs": 196, 00:17:01.682 "send_doorbell_updates": 107, 00:17:01.682 "total_recv_wrs": 4214, 00:17:01.682 "recv_doorbell_updates": 107 00:17:01.682 }, 00:17:01.682 { 00:17:01.682 "name": "mlx5_1", 00:17:01.682 "polls": 3516729, 00:17:01.682 "idle_polls": 3516729, 00:17:01.682 "completions": 0, 00:17:01.682 "requests": 0, 00:17:01.682 "request_latency": 0, 00:17:01.682 "pending_free_request": 0, 00:17:01.682 "pending_rdma_read": 0, 00:17:01.682 "pending_rdma_write": 0, 00:17:01.682 "pending_rdma_send": 0, 00:17:01.682 "total_send_wrs": 0, 00:17:01.682 "send_doorbell_updates": 0, 00:17:01.682 "total_recv_wrs": 4096, 00:17:01.682 "recv_doorbell_updates": 1 00:17:01.682 } 00:17:01.682 ] 00:17:01.682 } 00:17:01.682 ] 00:17:01.682 }, 00:17:01.682 { 00:17:01.682 "name": "nvmf_tgt_poll_group_003", 00:17:01.682 "admin_qpairs": 2, 00:17:01.682 "io_qpairs": 26, 00:17:01.682 "current_admin_qpairs": 0, 00:17:01.682 "current_io_qpairs": 0, 00:17:01.682 "pending_bdev_io": 0, 00:17:01.682 "completed_nvme_io": 77, 00:17:01.682 "transports": [ 00:17:01.682 { 00:17:01.682 "trtype": "RDMA", 00:17:01.682 "pending_data_buffer": 0, 00:17:01.682 "devices": [ 00:17:01.682 { 00:17:01.682 "name": "mlx5_0", 00:17:01.682 "polls": 2766854, 00:17:01.682 "idle_polls": 2766615, 00:17:01.682 "completions": 260, 00:17:01.682 "requests": 130, 00:17:01.682 "request_latency": 19911566, 00:17:01.682 "pending_free_request": 0, 00:17:01.682 "pending_rdma_read": 0, 00:17:01.682 "pending_rdma_write": 0, 00:17:01.682 "pending_rdma_send": 0, 00:17:01.682 "total_send_wrs": 206, 00:17:01.682 "send_doorbell_updates": 117, 00:17:01.682 "total_recv_wrs": 4226, 00:17:01.682 "recv_doorbell_updates": 118 00:17:01.682 }, 00:17:01.682 { 00:17:01.682 "name": "mlx5_1", 00:17:01.682 "polls": 2766854, 00:17:01.682 "idle_polls": 2766854, 00:17:01.682 "completions": 0, 00:17:01.682 "requests": 0, 00:17:01.682 "request_latency": 0, 00:17:01.682 "pending_free_request": 0, 00:17:01.682 "pending_rdma_read": 0, 00:17:01.682 "pending_rdma_write": 0, 00:17:01.682 "pending_rdma_send": 0, 00:17:01.682 "total_send_wrs": 0, 00:17:01.682 "send_doorbell_updates": 0, 00:17:01.682 "total_recv_wrs": 4096, 00:17:01.682 "recv_doorbell_updates": 1 00:17:01.682 } 00:17:01.682 ] 00:17:01.682 } 00:17:01.682 ] 00:17:01.682 } 00:17:01.682 ] 00:17:01.682 }' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:17:01.682 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 112993240 > 0 )) 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:01.682 rmmod nvme_rdma 00:17:01.682 rmmod nvme_fabrics 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.682 
10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2202992 ']' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2202992 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2202992 ']' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2202992 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.682 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2202992 00:17:01.940 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:01.940 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:01.940 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2202992' 00:17:01.940 killing process with pid 2202992 00:17:01.940 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2202992 00:17:01.940 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2202992 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:02.198 00:17:02.198 real 0m35.165s 00:17:02.198 user 1m59.971s 00:17:02.198 sys 0m5.237s 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.198 ************************************ 00:17:02.198 END TEST nvmf_rpc 00:17:02.198 ************************************ 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.198 10:39:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:02.198 ************************************ 00:17:02.198 START TEST nvmf_invalid 00:17:02.198 ************************************ 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:02.199 * Looking for test storage... 
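The nvmf_rpc run above closes by summing counters out of nvmf_get_stats with the suite's jsum helper — the jq-plus-awk pipeline traced at target/rpc.sh@19-20 and asserted at @112-@118. A minimal stand-alone sketch of that aggregation pattern follows; the rpc.py path is an assumption for illustration, not taken from this log.

#!/usr/bin/env bash
# Sketch of the jsum-style aggregation used by target/rpc.sh (hypothetical
# stand-alone form; RPC=... is an assumed path, adjust to your checkout).
RPC=./scripts/rpc.py

jsum() {
    local filter=$1
    # Sum every value the jq filter selects across all poll groups/devices.
    "$RPC" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

# The same positivity checks the test asserts (rpc.sh@112-@118):
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))
(( $(jsum '.poll_groups[].transports[].devices[].completions') > 0 ))
(( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))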
00:17:02.199 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.199 10:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.199 10:39:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:07.460 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:07.460 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:07.461 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:07.461 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:07.461 Found net devices under 0000:da:00.0: mlx_0_0 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:07.461 Found net devices under 0000:da:00.1: mlx_0_1 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:07.461 10:39:14 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:07.461 
10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:07.461 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:07.462 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:07.462 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:07.462 altname enp218s0f0np0 00:17:07.462 altname ens818f0np0 00:17:07.462 inet 192.168.100.8/24 scope global mlx_0_0 00:17:07.462 valid_lft forever preferred_lft forever 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:07.462 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:07.462 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:07.462 altname enp218s0f1np1 00:17:07.462 altname ens818f1np1 00:17:07.462 inet 192.168.100.9/24 scope global mlx_0_1 00:17:07.462 valid_lft forever preferred_lft forever 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:07.462 192.168.100.9' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:07.462 192.168.100.9' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:07.462 192.168.100.9' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:17:07.462 10:39:14 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2211566 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2211566 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2211566 ']' 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:07.462 10:39:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:07.720 [2024-07-24 10:39:14.919843] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:17:07.720 [2024-07-24 10:39:14.919897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.720 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.720 [2024-07-24 10:39:14.975614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.720 [2024-07-24 10:39:15.020584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.720 [2024-07-24 10:39:15.020622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.720 [2024-07-24 10:39:15.020628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.720 [2024-07-24 10:39:15.020634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
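The RDMA address discovery traced above (nvmf/common.sh get_ip_address and the RDMA_IP_LIST assembly) reduces to parsing `ip -o -4 addr show` for each mlx interface. A rough stand-alone sketch of that lookup, using the interface names and addresses observed in this run purely as examples:

#!/usr/bin/env bash
# Sketch of the get_ip_address pattern from nvmf/common.sh. The interface
# names (mlx_0_0 / mlx_0_1) are specific to this run, not a general default.
get_ip_address() {
    local interface=$1
    # Column 4 of `ip -o -4 addr show` is the CIDR address, e.g. 192.168.100.8/24.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

first_ip=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this log
second_ip=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this log
printf '%s\n%s\n' "$first_ip" "$second_ip"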
00:17:07.720 [2024-07-24 10:39:15.020639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.720 [2024-07-24 10:39:15.020687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.720 [2024-07-24 10:39:15.020786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.720 [2024-07-24 10:39:15.020873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.720 [2024-07-24 10:39:15.020873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:07.720 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17752 00:17:07.978 [2024-07-24 10:39:15.309353] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:07.978 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:07.978 { 00:17:07.978 "nqn": "nqn.2016-06.io.spdk:cnode17752", 00:17:07.978 "tgt_name": "foobar", 00:17:07.978 "method": "nvmf_create_subsystem", 00:17:07.978 "req_id": 1 00:17:07.978 } 00:17:07.978 Got JSON-RPC error response 00:17:07.978 response: 00:17:07.978 { 00:17:07.978 "code": -32603, 00:17:07.978 "message": "Unable to find target foobar" 00:17:07.978 }' 00:17:07.978 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:07.978 { 00:17:07.978 "nqn": "nqn.2016-06.io.spdk:cnode17752", 00:17:07.978 "tgt_name": "foobar", 00:17:07.978 "method": "nvmf_create_subsystem", 00:17:07.978 "req_id": 1 00:17:07.978 } 00:17:07.978 Got JSON-RPC error response 00:17:07.978 response: 00:17:07.978 { 00:17:07.978 "code": -32603, 00:17:07.978 "message": "Unable to find target foobar" 00:17:07.978 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:07.978 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:07.978 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9383 00:17:08.235 [2024-07-24 10:39:15.506055] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9383: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:08.235 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:08.235 { 00:17:08.235 "nqn": "nqn.2016-06.io.spdk:cnode9383", 00:17:08.235 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:17:08.235 "method": "nvmf_create_subsystem", 00:17:08.235 "req_id": 1 00:17:08.235 } 00:17:08.235 Got JSON-RPC error response 00:17:08.235 response: 00:17:08.235 { 00:17:08.235 "code": -32602, 00:17:08.235 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:08.235 }' 00:17:08.235 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:08.235 { 00:17:08.235 "nqn": "nqn.2016-06.io.spdk:cnode9383", 00:17:08.235 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:08.235 "method": "nvmf_create_subsystem", 00:17:08.235 "req_id": 1 00:17:08.235 } 00:17:08.235 Got JSON-RPC error response 00:17:08.235 response: 00:17:08.235 { 00:17:08.235 "code": -32602, 00:17:08.235 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:08.235 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:08.235 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:08.235 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27255 00:17:08.493 [2024-07-24 10:39:15.706735] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27255: invalid model number 'SPDK_Controller' 00:17:08.493 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:08.493 { 00:17:08.493 "nqn": "nqn.2016-06.io.spdk:cnode27255", 00:17:08.493 "model_number": "SPDK_Controller\u001f", 00:17:08.493 "method": "nvmf_create_subsystem", 00:17:08.493 "req_id": 1 00:17:08.493 } 00:17:08.493 Got JSON-RPC error response 00:17:08.493 response: 00:17:08.493 { 00:17:08.493 "code": -32602, 00:17:08.493 "message": "Invalid MN SPDK_Controller\u001f" 00:17:08.493 }' 00:17:08.493 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:08.493 { 00:17:08.493 "nqn": "nqn.2016-06.io.spdk:cnode27255", 00:17:08.493 "model_number": "SPDK_Controller\u001f", 00:17:08.493 "method": "nvmf_create_subsystem", 00:17:08.493 "req_id": 1 00:17:08.494 } 00:17:08.494 Got JSON-RPC error response 00:17:08.494 response: 00:17:08.494 { 00:17:08.494 "code": -32602, 00:17:08.494 "message": "Invalid MN SPDK_Controller\u001f" 00:17:08.494 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:08.494 10:39:15 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:08.494 10:39:15 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.494 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:08.495 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:08.495 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:08.495 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.495 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.495 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:17:08.495 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '477.\'\''&RZmJh!;!7qx1B' 00:17:08.495 10:39:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '477.\'\''&RZmJh!;!7qx1B' nqn.2016-06.io.spdk:cnode21496 00:17:08.753 [2024-07-24 10:39:16.027852] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21496: invalid serial number '477.\'&RZmJh!;!7qx1B' 00:17:08.753 10:39:16 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:08.753 { 00:17:08.753 "nqn": "nqn.2016-06.io.spdk:cnode21496", 00:17:08.753 "serial_number": "477.\\'\''\u007f&RZmJh!;!7qx1B", 00:17:08.753 "method": "nvmf_create_subsystem", 00:17:08.753 "req_id": 1 00:17:08.753 } 00:17:08.753 Got JSON-RPC error response 00:17:08.753 response: 00:17:08.753 { 00:17:08.753 "code": -32602, 00:17:08.753 "message": "Invalid SN 477.\\'\''\u007f&RZmJh!;!7qx1B" 00:17:08.753 }' 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:08.753 { 00:17:08.753 "nqn": "nqn.2016-06.io.spdk:cnode21496", 00:17:08.753 "serial_number": "477.\\'\u007f&RZmJh!;!7qx1B", 00:17:08.753 "method": "nvmf_create_subsystem", 00:17:08.753 "req_id": 1 00:17:08.753 } 00:17:08.753 Got JSON-RPC error response 00:17:08.753 response: 00:17:08.753 { 00:17:08.753 "code": -32602, 00:17:08.753 "message": "Invalid SN 477.\\'\u007f&RZmJh!;!7qx1B" 00:17:08.753 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:08.753 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x64' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- 
# (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:08.754 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x45' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'D?m\WrJ5d~tK5)fS|'\''u;w>a0uw! z@2r2Ee"_Oiu"' 00:17:09.013 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'D?m\WrJ5d~tK5)fS|'\''u;w>a0uw! z@2r2Ee"_Oiu"' nqn.2016-06.io.spdk:cnode6137 00:17:09.271 [2024-07-24 10:39:16.477449] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6137: invalid model number 'D?m\WrJ5d~tK5)fS|'u;w>a0uw! z@2r2Ee"_Oiu"' 00:17:09.271 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:09.271 { 00:17:09.271 "nqn": "nqn.2016-06.io.spdk:cnode6137", 00:17:09.271 "model_number": "D?m\\WrJ5d~tK5)fS|'\''u;w>a0uw! z@2r2Ee\"_Oiu\"", 00:17:09.271 "method": "nvmf_create_subsystem", 00:17:09.271 "req_id": 1 00:17:09.271 } 00:17:09.271 Got JSON-RPC error response 00:17:09.271 response: 00:17:09.271 { 00:17:09.271 "code": -32602, 00:17:09.271 "message": "Invalid MN D?m\\WrJ5d~tK5)fS|'\''u;w>a0uw! z@2r2Ee\"_Oiu\"" 00:17:09.271 }' 00:17:09.271 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:09.271 { 00:17:09.271 "nqn": "nqn.2016-06.io.spdk:cnode6137", 00:17:09.271 "model_number": "D?m\\WrJ5d~tK5)fS|'u;w>a0uw! z@2r2Ee\"_Oiu\"", 00:17:09.271 "method": "nvmf_create_subsystem", 00:17:09.271 "req_id": 1 00:17:09.271 } 00:17:09.271 Got JSON-RPC error response 00:17:09.271 response: 00:17:09.271 { 00:17:09.271 "code": -32602, 00:17:09.271 "message": "Invalid MN D?m\\WrJ5d~tK5)fS|'u;w>a0uw! z@2r2Ee\"_Oiu\"" 00:17:09.271 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:09.271 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:17:09.271 [2024-07-24 10:39:16.674989] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e74140/0x1e78610) succeed. 00:17:09.271 [2024-07-24 10:39:16.684196] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e75730/0x1eb9ca0) succeed. 
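The trace above is target/invalid.sh exercising input validation: it assembles a random string one character at a time with printf %x / echo -e, then calls rpc.py nvmf_create_subsystem with that string as the serial number (-s) or model number (-d) and checks that the JSON-RPC response contains "Invalid SN" / "Invalid MN". A minimal standalone sketch of the same pattern follows; the helper name, the NQNs and the relative rpc.py path are illustrative assumptions rather than values taken from this log.

#!/usr/bin/env bash
# Sketch of the validation check traced above (NQNs and rpc.py path are placeholders).
gen_random_s() {
    # Build a string of $1 random characters drawn from ASCII 32..127,
    # the same character pool as the chars array in the trace.
    local length=$1 ll string=''
    for (( ll = 0; ll < length; ll++ )); do
        string+=$(echo -e "\x$(printf '%x' $((RANDOM % 96 + 32)))")
    done
    echo "$string"
}

sn=$(gen_random_s 21)   # 21 chars, matching the serial-number string echoed above
mn=$(gen_random_s 41)   # 41 chars, matching the gen_random_s 41 call above

# Both calls are expected to fail with a -32602 JSON-RPC error, as in the
# "Invalid SN" / "Invalid MN" responses captured in the trace.
out=$(./scripts/rpc.py nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode1 2>&1) || true
[[ $out == *"Invalid SN"* ]] && echo 'serial number rejected as expected'
out=$(./scripts/rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode2 2>&1) || true
[[ $out == *"Invalid MN"* ]] && echo 'model number rejected as expected'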
00:17:09.529 10:39:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:09.786 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:17:09.786 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:17:09.786 192.168.100.9' 00:17:09.786 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:09.786 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:17:09.787 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:17:09.787 [2024-07-24 10:39:17.181234] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:09.787 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:09.787 { 00:17:09.787 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:09.787 "listen_address": { 00:17:09.787 "trtype": "rdma", 00:17:09.787 "traddr": "192.168.100.8", 00:17:09.787 "trsvcid": "4421" 00:17:09.787 }, 00:17:09.787 "method": "nvmf_subsystem_remove_listener", 00:17:09.787 "req_id": 1 00:17:09.787 } 00:17:09.787 Got JSON-RPC error response 00:17:09.787 response: 00:17:09.787 { 00:17:09.787 "code": -32602, 00:17:09.787 "message": "Invalid parameters" 00:17:09.787 }' 00:17:09.787 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:09.787 { 00:17:09.787 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:09.787 "listen_address": { 00:17:09.787 "trtype": "rdma", 00:17:09.787 "traddr": "192.168.100.8", 00:17:09.787 "trsvcid": "4421" 00:17:09.787 }, 00:17:09.787 "method": "nvmf_subsystem_remove_listener", 00:17:09.787 "req_id": 1 00:17:09.787 } 00:17:09.787 Got JSON-RPC error response 00:17:09.787 response: 00:17:09.787 { 00:17:09.787 "code": -32602, 00:17:09.787 "message": "Invalid parameters" 00:17:09.787 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:09.787 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25638 -i 0 00:17:10.044 [2024-07-24 10:39:17.365863] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25638: invalid cntlid range [0-65519] 00:17:10.044 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:10.044 { 00:17:10.044 "nqn": "nqn.2016-06.io.spdk:cnode25638", 00:17:10.044 "min_cntlid": 0, 00:17:10.044 "method": "nvmf_create_subsystem", 00:17:10.044 "req_id": 1 00:17:10.044 } 00:17:10.044 Got JSON-RPC error response 00:17:10.044 response: 00:17:10.044 { 00:17:10.044 "code": -32602, 00:17:10.044 "message": "Invalid cntlid range [0-65519]" 00:17:10.044 }' 00:17:10.044 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:10.044 { 00:17:10.044 "nqn": "nqn.2016-06.io.spdk:cnode25638", 00:17:10.044 "min_cntlid": 0, 00:17:10.044 "method": "nvmf_create_subsystem", 00:17:10.044 "req_id": 1 00:17:10.044 } 00:17:10.044 Got JSON-RPC error response 00:17:10.044 response: 00:17:10.044 { 00:17:10.044 "code": -32602, 00:17:10.044 "message": 
"Invalid cntlid range [0-65519]" 00:17:10.044 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:10.044 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18503 -i 65520 00:17:10.301 [2024-07-24 10:39:17.546544] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18503: invalid cntlid range [65520-65519] 00:17:10.301 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:10.301 { 00:17:10.301 "nqn": "nqn.2016-06.io.spdk:cnode18503", 00:17:10.301 "min_cntlid": 65520, 00:17:10.301 "method": "nvmf_create_subsystem", 00:17:10.301 "req_id": 1 00:17:10.301 } 00:17:10.301 Got JSON-RPC error response 00:17:10.301 response: 00:17:10.301 { 00:17:10.301 "code": -32602, 00:17:10.301 "message": "Invalid cntlid range [65520-65519]" 00:17:10.301 }' 00:17:10.301 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:10.301 { 00:17:10.301 "nqn": "nqn.2016-06.io.spdk:cnode18503", 00:17:10.301 "min_cntlid": 65520, 00:17:10.301 "method": "nvmf_create_subsystem", 00:17:10.301 "req_id": 1 00:17:10.301 } 00:17:10.301 Got JSON-RPC error response 00:17:10.301 response: 00:17:10.301 { 00:17:10.301 "code": -32602, 00:17:10.301 "message": "Invalid cntlid range [65520-65519]" 00:17:10.301 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:10.301 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24992 -I 0 00:17:10.301 [2024-07-24 10:39:17.731242] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24992: invalid cntlid range [1-0] 00:17:10.558 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:10.558 { 00:17:10.558 "nqn": "nqn.2016-06.io.spdk:cnode24992", 00:17:10.558 "max_cntlid": 0, 00:17:10.558 "method": "nvmf_create_subsystem", 00:17:10.558 "req_id": 1 00:17:10.558 } 00:17:10.558 Got JSON-RPC error response 00:17:10.558 response: 00:17:10.558 { 00:17:10.558 "code": -32602, 00:17:10.558 "message": "Invalid cntlid range [1-0]" 00:17:10.558 }' 00:17:10.558 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:10.558 { 00:17:10.558 "nqn": "nqn.2016-06.io.spdk:cnode24992", 00:17:10.558 "max_cntlid": 0, 00:17:10.558 "method": "nvmf_create_subsystem", 00:17:10.558 "req_id": 1 00:17:10.558 } 00:17:10.558 Got JSON-RPC error response 00:17:10.558 response: 00:17:10.558 { 00:17:10.558 "code": -32602, 00:17:10.558 "message": "Invalid cntlid range [1-0]" 00:17:10.558 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:10.558 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4493 -I 65520 00:17:10.558 [2024-07-24 10:39:17.903871] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4493: invalid cntlid range [1-65520] 00:17:10.558 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:10.558 { 00:17:10.558 "nqn": "nqn.2016-06.io.spdk:cnode4493", 00:17:10.558 "max_cntlid": 65520, 00:17:10.558 "method": "nvmf_create_subsystem", 00:17:10.558 "req_id": 1 00:17:10.558 } 00:17:10.558 Got JSON-RPC 
error response 00:17:10.558 response: 00:17:10.558 { 00:17:10.558 "code": -32602, 00:17:10.558 "message": "Invalid cntlid range [1-65520]" 00:17:10.558 }' 00:17:10.558 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:10.558 { 00:17:10.558 "nqn": "nqn.2016-06.io.spdk:cnode4493", 00:17:10.558 "max_cntlid": 65520, 00:17:10.558 "method": "nvmf_create_subsystem", 00:17:10.558 "req_id": 1 00:17:10.558 } 00:17:10.558 Got JSON-RPC error response 00:17:10.558 response: 00:17:10.558 { 00:17:10.558 "code": -32602, 00:17:10.558 "message": "Invalid cntlid range [1-65520]" 00:17:10.558 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:10.558 10:39:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27697 -i 6 -I 5 00:17:10.816 [2024-07-24 10:39:18.072507] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27697: invalid cntlid range [6-5] 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:10.816 { 00:17:10.816 "nqn": "nqn.2016-06.io.spdk:cnode27697", 00:17:10.816 "min_cntlid": 6, 00:17:10.816 "max_cntlid": 5, 00:17:10.816 "method": "nvmf_create_subsystem", 00:17:10.816 "req_id": 1 00:17:10.816 } 00:17:10.816 Got JSON-RPC error response 00:17:10.816 response: 00:17:10.816 { 00:17:10.816 "code": -32602, 00:17:10.816 "message": "Invalid cntlid range [6-5]" 00:17:10.816 }' 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:10.816 { 00:17:10.816 "nqn": "nqn.2016-06.io.spdk:cnode27697", 00:17:10.816 "min_cntlid": 6, 00:17:10.816 "max_cntlid": 5, 00:17:10.816 "method": "nvmf_create_subsystem", 00:17:10.816 "req_id": 1 00:17:10.816 } 00:17:10.816 Got JSON-RPC error response 00:17:10.816 response: 00:17:10.816 { 00:17:10.816 "code": -32602, 00:17:10.816 "message": "Invalid cntlid range [6-5]" 00:17:10.816 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:10.816 { 00:17:10.816 "name": "foobar", 00:17:10.816 "method": "nvmf_delete_target", 00:17:10.816 "req_id": 1 00:17:10.816 } 00:17:10.816 Got JSON-RPC error response 00:17:10.816 response: 00:17:10.816 { 00:17:10.816 "code": -32602, 00:17:10.816 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:10.816 }' 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:10.816 { 00:17:10.816 "name": "foobar", 00:17:10.816 "method": "nvmf_delete_target", 00:17:10.816 "req_id": 1 00:17:10.816 } 00:17:10.816 Got JSON-RPC error response 00:17:10.816 response: 00:17:10.816 { 00:17:10.816 "code": -32602, 00:17:10.816 "message": "The specified target doesn't exist, cannot delete it." 
00:17:10.816 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:10.816 rmmod nvme_rdma 00:17:10.816 rmmod nvme_fabrics 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.816 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2211566 ']' 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2211566 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2211566 ']' 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2211566 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:10.817 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2211566 00:17:11.074 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:11.074 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:11.074 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2211566' 00:17:11.074 killing process with pid 2211566 00:17:11.074 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2211566 00:17:11.074 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2211566 00:17:11.332 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.332 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:11.332 00:17:11.332 real 0m9.034s 00:17:11.332 user 0m17.511s 00:17:11.332 sys 0m4.944s 00:17:11.332 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.332 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:11.332 ************************************ 00:17:11.332 END 
TEST nvmf_invalid 00:17:11.332 ************************************ 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.333 ************************************ 00:17:11.333 START TEST nvmf_connect_stress 00:17:11.333 ************************************ 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:11.333 * Looking for test storage... 00:17:11.333 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.333 10:39:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@296 -- # e810=() 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.596 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:16.596 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:16.597 10:39:23 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:16.597 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:16.597 Found net devices under 0000:da:00.0: mlx_0_0 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:16.597 Found net devices 
under 0000:da:00.1: mlx_0_1 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@105 -- # continue 2 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:16.597 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:16.597 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:16.597 altname enp218s0f0np0 00:17:16.597 altname ens818f0np0 00:17:16.597 inet 192.168.100.8/24 scope global mlx_0_0 00:17:16.597 valid_lft forever preferred_lft forever 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:16.597 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:16.597 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:16.597 altname enp218s0f1np1 00:17:16.597 altname 
ens818f1np1 00:17:16.597 inet 192.168.100.9/24 scope global mlx_0_1 00:17:16.597 valid_lft forever preferred_lft forever 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:16.597 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:16.598 192.168.100.9' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:16.598 192.168.100.9' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:16.598 192.168.100.9' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2215261 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2215261 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2215261 ']' 00:17:16.598 10:39:23 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 [2024-07-24 10:39:23.655221] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:17:16.598 [2024-07-24 10:39:23.655272] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.598 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.598 [2024-07-24 10:39:23.711441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:16.598 [2024-07-24 10:39:23.753245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.598 [2024-07-24 10:39:23.753288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.598 [2024-07-24 10:39:23.753294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.598 [2024-07-24 10:39:23.753300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.598 [2024-07-24 10:39:23.753305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:16.598 [2024-07-24 10:39:23.753413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.598 [2024-07-24 10:39:23.753536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.598 [2024-07-24 10:39:23.753539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.598 10:39:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 [2024-07-24 10:39:23.909740] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x164ac90/0x164f140) succeed. 00:17:16.598 [2024-07-24 10:39:23.918763] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x164c1e0/0x16907d0) succeed. 
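At this point nvmf_tgt is up and listening on /var/tmp/spdk.sock, and the test configures it through rpc_cmd: the RDMA transport is created above, and the subsystem, its 192.168.100.8:4420 listener, and a null bdev follow in the next trace lines. A condensed sketch of the same sequence issued by hand with SPDK's scripts/rpc.py (the rpc.py invocation and the default RPC socket are assumptions; the RPC names and arguments are the ones visible in the trace):

    # stand up the NVMe/RDMA target configuration used by connect_stress.sh
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # subsystem: allow any host (-a), set the serial (-s), cap namespaces at 10 (-m)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # accept NVMe/RDMA connections on the first target IP, port 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # 1000 MB null bdev with a 512-byte block size for the stress test to use
    scripts/rpc.py bdev_null_create NULL1 1000 512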
00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 [2024-07-24 10:39:24.029237] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 NULL1 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2215396 00:17:16.598 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:16.599 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:16.599 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:16.599 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.856 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.114 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:17.114 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.114 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.114 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.371 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.371 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:17.371 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.371 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.371 10:39:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.936 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.936 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:17.936 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.936 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.936 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.193 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.193 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:18.193 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.193 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.193 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.492 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.492 10:39:25 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:18.492 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.492 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.492 10:39:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.777 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.777 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:18.777 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.777 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.777 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.035 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.035 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:19.035 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.035 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.035 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.292 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.292 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:19.292 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.292 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.292 10:39:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.857 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.857 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:19.857 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.857 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.857 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.114 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.114 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:20.114 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.114 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.114 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.371 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.371 
10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:20.371 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.371 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.371 10:39:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.628 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.628 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:20.628 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.628 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.628 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.194 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.194 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:21.194 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.194 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.194 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.451 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.451 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:21.451 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.451 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.451 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.708 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.708 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:21.708 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.708 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.708 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.966 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.966 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:21.966 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.966 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.966 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.223 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
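The repeated kill -0 2215396 / rpc_cmd pairs above and below are the liveness loop of the stress phase: kill -0 delivers no signal, it only reports whether the PID still exists, so the script keeps driving RPCs at the target for as long as the connect_stress tool (PID 2215396, started with -t 10 for a ten-second run) stays alive, and the loop ends once kill reports "No such process" at connect_stress.sh line 34 further below. A minimal sketch of that idiom under assumed helper behavior (whether rpc_cmd really consumes rpc.txt this way is an assumption, and the connect_stress path is shortened):

    # launch the stress tool in the background and remember its PID
    ./connect_stress -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    # kill -0 sends no signal; success just means the process still exists
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < rpc.txt    # assumed: replay the generated RPCs while the run is alive
    done
    wait "$PERF_PID"         # reap the stress tool once the ten seconds are up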
00:17:22.223 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:22.223 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.223 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.223 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.787 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.787 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:22.787 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.787 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.787 10:39:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.043 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.043 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:23.043 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.043 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.043 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.300 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.300 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:23.300 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.300 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.300 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.557 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.557 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:23.557 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.557 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.557 10:39:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.122 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.122 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:24.122 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.122 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.122 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.379 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:24.379 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:24.379 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.379 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.379 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.636 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.636 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:24.636 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.636 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.636 10:39:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.893 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.893 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:24.893 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.893 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.893 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.457 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.458 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:25.458 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.458 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.458 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.715 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.715 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:25.715 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.715 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.715 10:39:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.972 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.973 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:25.973 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.973 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.973 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.230 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.230 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:26.230 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.230 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.230 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.487 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.487 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:26.487 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.487 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.487 10:39:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.744 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2215396 00:17:27.003 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2215396) - No such process 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2215396 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:27.003 rmmod nvme_rdma 00:17:27.003 rmmod nvme_fabrics 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2215261 ']' 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # 
killprocess 2215261 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2215261 ']' 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2215261 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2215261 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2215261' 00:17:27.003 killing process with pid 2215261 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2215261 00:17:27.003 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2215261 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:27.261 00:17:27.261 real 0m15.979s 00:17:27.261 user 0m39.615s 00:17:27.261 sys 0m5.365s 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.261 ************************************ 00:17:27.261 END TEST nvmf_connect_stress 00:17:27.261 ************************************ 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.261 ************************************ 00:17:27.261 START TEST nvmf_fused_ordering 00:17:27.261 ************************************ 00:17:27.261 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:27.519 * Looking for test storage... 
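Teardown of the previous test: after unloading nvme-rdma and nvme-fabrics, killprocess stops the nvmf_tgt started earlier. The trace shows its guard: confirm a PID was given and still exists, read the command name with ps --no-headers -o comm= (here reactor_1, the SPDK reactor thread), refuse to proceed if that name is sudo, then kill and wait. A minimal reconstruction of that guard from the visible checks, not the verbatim autotest_common.sh helper (the sudo branch of the real helper is omitted):

    # hypothetical reconstruction of the killprocess guard seen in the trace
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # a PID must be supplied
        kill -0 "$pid" || return 1                 # bail out if the process is already gone
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1     # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # block until the target has exited
    }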
00:17:27.519 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.519 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.520 10:39:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 
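The records above declare the per-vendor device-ID lists (e810, x722, mlx) that the following records match against the PCI bus via the pre-built pci_bus_cache map in nvmf/common.sh. As a rough standalone illustration of that matching step, assuming nothing beyond plain sysfs and the 0x15b3:0x1015 (Mellanox ConnectX) ID this host reports, one could write:

  #!/usr/bin/env bash
  # Find PCI functions whose vendor:device pair matches an RDMA-capable NIC
  # and print the netdev registered under each one, mirroring the trace above.
  want_vendor=0x15b3   # Mellanox
  want_device=0x1015   # as reported for 0000:da:00.0 and 0000:da:00.1 below
  for dev in /sys/bus/pci/devices/*; do
    if [[ $(cat "$dev/vendor") == "$want_vendor" && $(cat "$dev/device") == "$want_device" ]]; then
      echo "Found ${dev##*/} ($want_vendor - $want_device)"
      ls "$dev/net" 2>/dev/null    # e.g. mlx_0_0 / mlx_0_1 on this host
    fi
  done

The trace then keeps only the mlx list (the mlx5 driver is in use) and records the netdev names found under each matching function.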
00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.784 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:32.785 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:32.785 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:32.785 Found net devices under 0000:da:00.0: mlx_0_0 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:32.785 Found net devices under 0000:da:00.1: mlx_0_1 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:32.785 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.044 10:39:40 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:33.044 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.044 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:33.044 altname enp218s0f0np0 00:17:33.044 altname ens818f0np0 00:17:33.044 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.044 valid_lft forever preferred_lft forever 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:33.044 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.044 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:33.044 altname enp218s0f1np1 00:17:33.044 altname ens818f1np1 00:17:33.044 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.044 valid_lft forever preferred_lft forever 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
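Each get_ip_address call above reduces the interface's ip(8) output to a bare IPv4 address. The same pipeline, run by hand for the first port found on this host, would be:

  # Extract the IPv4 address of an RDMA netdev, exactly as the trace does for mlx_0_0.
  iface=mlx_0_0
  ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
  # prints 192.168.100.8 here; an empty result means the port has no address yet,
  # in which case allocate_nic_ips would assign one from the 192.168.100.0/24
  # range implied by NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR above.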
00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.044 192.168.100.9' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:33.044 192.168.100.9' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:33.044 192.168.100.9' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:33.044 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2220189 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2220189 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2220189 ']' 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.045 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.045 [2024-07-24 10:39:40.414061] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:17:33.045 [2024-07-24 10:39:40.414103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.045 [2024-07-24 10:39:40.466577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.303 [2024-07-24 10:39:40.506434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.303 [2024-07-24 10:39:40.506471] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.303 [2024-07-24 10:39:40.506479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.303 [2024-07-24 10:39:40.506484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.303 [2024-07-24 10:39:40.506489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.303 [2024-07-24 10:39:40.506518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 [2024-07-24 10:39:40.649958] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfe05a0/0xfe4a50) succeed. 00:17:33.303 [2024-07-24 10:39:40.658991] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfe1a50/0x10260e0) succeed. 
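The nvmf_create_transport call above is what produces the two create_ib_device notices: creating the RDMA transport makes the target claim both mlx5 ports. Issued directly against the RPC socket instead of through rpc_cmd, the same step would look roughly like this (rpc.py path and socket are assumed; the transport flags are the ones in the trace):

  # Same flags as the trace: RDMA transport, 1024 shared receive buffers, -u 8192.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192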
00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 [2024-07-24 10:39:40.720483] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 NULL1 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.303 10:39:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:33.561 [2024-07-24 10:39:40.773192] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
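With the transport in place, the remaining rpc_cmd calls above provision the test subsystem end to end, and the fused_ordering helper is then pointed at the resulting listener. The equivalent direct invocations, with every name and parameter taken from the trace (only the rpc.py helper path is an assumption), would be:

  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

  # Subsystem cnode1: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # Listen on the first RDMA address discovered earlier.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Back the subsystem with a null bdev (reported below as a 1 GB namespace) of 512-byte blocks.
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Run the fused-ordering test against the new listener, as the trace does.
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(0) through fused_ordering(1023) lines that follow are output from the helper itself, one line per iteration.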
00:17:33.561 [2024-07-24 10:39:40.773236] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220209 ] 00:17:33.561 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.561 Attached to nqn.2016-06.io.spdk:cnode1 00:17:33.561 Namespace ID: 1 size: 1GB 00:17:33.561 fused_ordering(0) 00:17:33.561 fused_ordering(1) 00:17:33.562 fused_ordering(2) 00:17:33.562 fused_ordering(3) 00:17:33.562 fused_ordering(4) 00:17:33.562 fused_ordering(5) 00:17:33.562 fused_ordering(6) 00:17:33.562 fused_ordering(7) 00:17:33.562 fused_ordering(8) 00:17:33.562 fused_ordering(9) 00:17:33.562 fused_ordering(10) 00:17:33.562 fused_ordering(11) 00:17:33.562 fused_ordering(12) 00:17:33.562 fused_ordering(13) 00:17:33.562 fused_ordering(14) 00:17:33.562 fused_ordering(15) 00:17:33.562 fused_ordering(16) 00:17:33.562 fused_ordering(17) 00:17:33.562 fused_ordering(18) 00:17:33.562 fused_ordering(19) 00:17:33.562 fused_ordering(20) 00:17:33.562 fused_ordering(21) 00:17:33.562 fused_ordering(22) 00:17:33.562 fused_ordering(23) 00:17:33.562 fused_ordering(24) 00:17:33.562 fused_ordering(25) 00:17:33.562 fused_ordering(26) 00:17:33.562 fused_ordering(27) 00:17:33.562 fused_ordering(28) 00:17:33.562 fused_ordering(29) 00:17:33.562 fused_ordering(30) 00:17:33.562 fused_ordering(31) 00:17:33.562 fused_ordering(32) 00:17:33.562 fused_ordering(33) 00:17:33.562 fused_ordering(34) 00:17:33.562 fused_ordering(35) 00:17:33.562 fused_ordering(36) 00:17:33.562 fused_ordering(37) 00:17:33.562 fused_ordering(38) 00:17:33.562 fused_ordering(39) 00:17:33.562 fused_ordering(40) 00:17:33.562 fused_ordering(41) 00:17:33.562 fused_ordering(42) 00:17:33.562 fused_ordering(43) 00:17:33.562 fused_ordering(44) 00:17:33.562 fused_ordering(45) 00:17:33.562 fused_ordering(46) 00:17:33.562 fused_ordering(47) 00:17:33.562 fused_ordering(48) 00:17:33.562 fused_ordering(49) 00:17:33.562 fused_ordering(50) 00:17:33.562 fused_ordering(51) 00:17:33.562 fused_ordering(52) 00:17:33.562 fused_ordering(53) 00:17:33.562 fused_ordering(54) 00:17:33.562 fused_ordering(55) 00:17:33.562 fused_ordering(56) 00:17:33.562 fused_ordering(57) 00:17:33.562 fused_ordering(58) 00:17:33.562 fused_ordering(59) 00:17:33.562 fused_ordering(60) 00:17:33.562 fused_ordering(61) 00:17:33.562 fused_ordering(62) 00:17:33.562 fused_ordering(63) 00:17:33.562 fused_ordering(64) 00:17:33.562 fused_ordering(65) 00:17:33.562 fused_ordering(66) 00:17:33.562 fused_ordering(67) 00:17:33.562 fused_ordering(68) 00:17:33.562 fused_ordering(69) 00:17:33.562 fused_ordering(70) 00:17:33.562 fused_ordering(71) 00:17:33.562 fused_ordering(72) 00:17:33.562 fused_ordering(73) 00:17:33.562 fused_ordering(74) 00:17:33.562 fused_ordering(75) 00:17:33.562 fused_ordering(76) 00:17:33.562 fused_ordering(77) 00:17:33.562 fused_ordering(78) 00:17:33.562 fused_ordering(79) 00:17:33.562 fused_ordering(80) 00:17:33.562 fused_ordering(81) 00:17:33.562 fused_ordering(82) 00:17:33.562 fused_ordering(83) 00:17:33.562 fused_ordering(84) 00:17:33.562 fused_ordering(85) 00:17:33.562 fused_ordering(86) 00:17:33.562 fused_ordering(87) 00:17:33.562 fused_ordering(88) 00:17:33.562 fused_ordering(89) 00:17:33.562 fused_ordering(90) 00:17:33.562 fused_ordering(91) 00:17:33.562 fused_ordering(92) 00:17:33.562 fused_ordering(93) 00:17:33.562 fused_ordering(94) 00:17:33.562 fused_ordering(95) 00:17:33.562 fused_ordering(96) 
00:17:33.562 fused_ordering(97) 00:17:33.562 fused_ordering(98) 00:17:33.562 fused_ordering(99) 00:17:33.562 fused_ordering(100) 00:17:33.562 fused_ordering(101) 00:17:33.562 fused_ordering(102) 00:17:33.562 fused_ordering(103) 00:17:33.562 fused_ordering(104) 00:17:33.562 fused_ordering(105) 00:17:33.562 fused_ordering(106) 00:17:33.562 fused_ordering(107) 00:17:33.562 fused_ordering(108) 00:17:33.562 fused_ordering(109) 00:17:33.562 fused_ordering(110) 00:17:33.562 fused_ordering(111) 00:17:33.562 fused_ordering(112) 00:17:33.562 fused_ordering(113) 00:17:33.562 fused_ordering(114) 00:17:33.562 fused_ordering(115) 00:17:33.562 fused_ordering(116) 00:17:33.562 fused_ordering(117) 00:17:33.562 fused_ordering(118) 00:17:33.562 fused_ordering(119) 00:17:33.562 fused_ordering(120) 00:17:33.562 fused_ordering(121) 00:17:33.562 fused_ordering(122) 00:17:33.562 fused_ordering(123) 00:17:33.562 fused_ordering(124) 00:17:33.562 fused_ordering(125) 00:17:33.562 fused_ordering(126) 00:17:33.562 fused_ordering(127) 00:17:33.562 fused_ordering(128) 00:17:33.562 fused_ordering(129) 00:17:33.562 fused_ordering(130) 00:17:33.562 fused_ordering(131) 00:17:33.562 fused_ordering(132) 00:17:33.562 fused_ordering(133) 00:17:33.562 fused_ordering(134) 00:17:33.562 fused_ordering(135) 00:17:33.562 fused_ordering(136) 00:17:33.562 fused_ordering(137) 00:17:33.562 fused_ordering(138) 00:17:33.562 fused_ordering(139) 00:17:33.562 fused_ordering(140) 00:17:33.562 fused_ordering(141) 00:17:33.562 fused_ordering(142) 00:17:33.562 fused_ordering(143) 00:17:33.562 fused_ordering(144) 00:17:33.562 fused_ordering(145) 00:17:33.562 fused_ordering(146) 00:17:33.562 fused_ordering(147) 00:17:33.562 fused_ordering(148) 00:17:33.562 fused_ordering(149) 00:17:33.562 fused_ordering(150) 00:17:33.562 fused_ordering(151) 00:17:33.562 fused_ordering(152) 00:17:33.562 fused_ordering(153) 00:17:33.562 fused_ordering(154) 00:17:33.562 fused_ordering(155) 00:17:33.562 fused_ordering(156) 00:17:33.562 fused_ordering(157) 00:17:33.562 fused_ordering(158) 00:17:33.562 fused_ordering(159) 00:17:33.562 fused_ordering(160) 00:17:33.562 fused_ordering(161) 00:17:33.562 fused_ordering(162) 00:17:33.562 fused_ordering(163) 00:17:33.562 fused_ordering(164) 00:17:33.562 fused_ordering(165) 00:17:33.562 fused_ordering(166) 00:17:33.562 fused_ordering(167) 00:17:33.562 fused_ordering(168) 00:17:33.562 fused_ordering(169) 00:17:33.562 fused_ordering(170) 00:17:33.562 fused_ordering(171) 00:17:33.562 fused_ordering(172) 00:17:33.562 fused_ordering(173) 00:17:33.562 fused_ordering(174) 00:17:33.562 fused_ordering(175) 00:17:33.562 fused_ordering(176) 00:17:33.562 fused_ordering(177) 00:17:33.562 fused_ordering(178) 00:17:33.562 fused_ordering(179) 00:17:33.562 fused_ordering(180) 00:17:33.562 fused_ordering(181) 00:17:33.562 fused_ordering(182) 00:17:33.562 fused_ordering(183) 00:17:33.562 fused_ordering(184) 00:17:33.562 fused_ordering(185) 00:17:33.562 fused_ordering(186) 00:17:33.562 fused_ordering(187) 00:17:33.562 fused_ordering(188) 00:17:33.562 fused_ordering(189) 00:17:33.562 fused_ordering(190) 00:17:33.562 fused_ordering(191) 00:17:33.562 fused_ordering(192) 00:17:33.562 fused_ordering(193) 00:17:33.562 fused_ordering(194) 00:17:33.562 fused_ordering(195) 00:17:33.562 fused_ordering(196) 00:17:33.562 fused_ordering(197) 00:17:33.562 fused_ordering(198) 00:17:33.562 fused_ordering(199) 00:17:33.562 fused_ordering(200) 00:17:33.562 fused_ordering(201) 00:17:33.562 fused_ordering(202) 00:17:33.562 fused_ordering(203) 00:17:33.562 
fused_ordering(204) 00:17:33.562 fused_ordering(205) 00:17:33.820 fused_ordering(206) 00:17:33.820 fused_ordering(207) 00:17:33.820 fused_ordering(208) 00:17:33.820 fused_ordering(209) 00:17:33.820 fused_ordering(210) 00:17:33.820 fused_ordering(211) 00:17:33.820 fused_ordering(212) 00:17:33.820 fused_ordering(213) 00:17:33.820 fused_ordering(214) 00:17:33.820 fused_ordering(215) 00:17:33.820 fused_ordering(216) 00:17:33.820 fused_ordering(217) 00:17:33.820 fused_ordering(218) 00:17:33.820 fused_ordering(219) 00:17:33.820 fused_ordering(220) 00:17:33.820 fused_ordering(221) 00:17:33.820 fused_ordering(222) 00:17:33.820 fused_ordering(223) 00:17:33.820 fused_ordering(224) 00:17:33.820 fused_ordering(225) 00:17:33.820 fused_ordering(226) 00:17:33.820 fused_ordering(227) 00:17:33.820 fused_ordering(228) 00:17:33.820 fused_ordering(229) 00:17:33.820 fused_ordering(230) 00:17:33.820 fused_ordering(231) 00:17:33.820 fused_ordering(232) 00:17:33.820 fused_ordering(233) 00:17:33.820 fused_ordering(234) 00:17:33.820 fused_ordering(235) 00:17:33.820 fused_ordering(236) 00:17:33.820 fused_ordering(237) 00:17:33.820 fused_ordering(238) 00:17:33.820 fused_ordering(239) 00:17:33.820 fused_ordering(240) 00:17:33.820 fused_ordering(241) 00:17:33.820 fused_ordering(242) 00:17:33.820 fused_ordering(243) 00:17:33.820 fused_ordering(244) 00:17:33.820 fused_ordering(245) 00:17:33.820 fused_ordering(246) 00:17:33.820 fused_ordering(247) 00:17:33.820 fused_ordering(248) 00:17:33.820 fused_ordering(249) 00:17:33.820 fused_ordering(250) 00:17:33.820 fused_ordering(251) 00:17:33.820 fused_ordering(252) 00:17:33.820 fused_ordering(253) 00:17:33.820 fused_ordering(254) 00:17:33.820 fused_ordering(255) 00:17:33.820 fused_ordering(256) 00:17:33.820 fused_ordering(257) 00:17:33.820 fused_ordering(258) 00:17:33.820 fused_ordering(259) 00:17:33.820 fused_ordering(260) 00:17:33.820 fused_ordering(261) 00:17:33.820 fused_ordering(262) 00:17:33.820 fused_ordering(263) 00:17:33.820 fused_ordering(264) 00:17:33.820 fused_ordering(265) 00:17:33.820 fused_ordering(266) 00:17:33.820 fused_ordering(267) 00:17:33.820 fused_ordering(268) 00:17:33.821 fused_ordering(269) 00:17:33.821 fused_ordering(270) 00:17:33.821 fused_ordering(271) 00:17:33.821 fused_ordering(272) 00:17:33.821 fused_ordering(273) 00:17:33.821 fused_ordering(274) 00:17:33.821 fused_ordering(275) 00:17:33.821 fused_ordering(276) 00:17:33.821 fused_ordering(277) 00:17:33.821 fused_ordering(278) 00:17:33.821 fused_ordering(279) 00:17:33.821 fused_ordering(280) 00:17:33.821 fused_ordering(281) 00:17:33.821 fused_ordering(282) 00:17:33.821 fused_ordering(283) 00:17:33.821 fused_ordering(284) 00:17:33.821 fused_ordering(285) 00:17:33.821 fused_ordering(286) 00:17:33.821 fused_ordering(287) 00:17:33.821 fused_ordering(288) 00:17:33.821 fused_ordering(289) 00:17:33.821 fused_ordering(290) 00:17:33.821 fused_ordering(291) 00:17:33.821 fused_ordering(292) 00:17:33.821 fused_ordering(293) 00:17:33.821 fused_ordering(294) 00:17:33.821 fused_ordering(295) 00:17:33.821 fused_ordering(296) 00:17:33.821 fused_ordering(297) 00:17:33.821 fused_ordering(298) 00:17:33.821 fused_ordering(299) 00:17:33.821 fused_ordering(300) 00:17:33.821 fused_ordering(301) 00:17:33.821 fused_ordering(302) 00:17:33.821 fused_ordering(303) 00:17:33.821 fused_ordering(304) 00:17:33.821 fused_ordering(305) 00:17:33.821 fused_ordering(306) 00:17:33.821 fused_ordering(307) 00:17:33.821 fused_ordering(308) 00:17:33.821 fused_ordering(309) 00:17:33.821 fused_ordering(310) 00:17:33.821 fused_ordering(311) 
00:17:33.821 fused_ordering(312) 00:17:33.821 fused_ordering(313) 00:17:33.821 fused_ordering(314) 00:17:33.821 fused_ordering(315) 00:17:33.821 fused_ordering(316) 00:17:33.821 fused_ordering(317) 00:17:33.821 fused_ordering(318) 00:17:33.821 fused_ordering(319) 00:17:33.821 fused_ordering(320) 00:17:33.821 fused_ordering(321) 00:17:33.821 fused_ordering(322) 00:17:33.821 fused_ordering(323) 00:17:33.821 fused_ordering(324) 00:17:33.821 fused_ordering(325) 00:17:33.821 fused_ordering(326) 00:17:33.821 fused_ordering(327) 00:17:33.821 fused_ordering(328) 00:17:33.821 fused_ordering(329) 00:17:33.821 fused_ordering(330) 00:17:33.821 fused_ordering(331) 00:17:33.821 fused_ordering(332) 00:17:33.821 fused_ordering(333) 00:17:33.821 fused_ordering(334) 00:17:33.821 fused_ordering(335) 00:17:33.821 fused_ordering(336) 00:17:33.821 fused_ordering(337) 00:17:33.821 fused_ordering(338) 00:17:33.821 fused_ordering(339) 00:17:33.821 fused_ordering(340) 00:17:33.821 fused_ordering(341) 00:17:33.821 fused_ordering(342) 00:17:33.821 fused_ordering(343) 00:17:33.821 fused_ordering(344) 00:17:33.821 fused_ordering(345) 00:17:33.821 fused_ordering(346) 00:17:33.821 fused_ordering(347) 00:17:33.821 fused_ordering(348) 00:17:33.821 fused_ordering(349) 00:17:33.821 fused_ordering(350) 00:17:33.821 fused_ordering(351) 00:17:33.821 fused_ordering(352) 00:17:33.821 fused_ordering(353) 00:17:33.821 fused_ordering(354) 00:17:33.821 fused_ordering(355) 00:17:33.821 fused_ordering(356) 00:17:33.821 fused_ordering(357) 00:17:33.821 fused_ordering(358) 00:17:33.821 fused_ordering(359) 00:17:33.821 fused_ordering(360) 00:17:33.821 fused_ordering(361) 00:17:33.821 fused_ordering(362) 00:17:33.821 fused_ordering(363) 00:17:33.821 fused_ordering(364) 00:17:33.821 fused_ordering(365) 00:17:33.821 fused_ordering(366) 00:17:33.821 fused_ordering(367) 00:17:33.821 fused_ordering(368) 00:17:33.821 fused_ordering(369) 00:17:33.821 fused_ordering(370) 00:17:33.821 fused_ordering(371) 00:17:33.821 fused_ordering(372) 00:17:33.821 fused_ordering(373) 00:17:33.821 fused_ordering(374) 00:17:33.821 fused_ordering(375) 00:17:33.821 fused_ordering(376) 00:17:33.821 fused_ordering(377) 00:17:33.821 fused_ordering(378) 00:17:33.821 fused_ordering(379) 00:17:33.821 fused_ordering(380) 00:17:33.821 fused_ordering(381) 00:17:33.821 fused_ordering(382) 00:17:33.821 fused_ordering(383) 00:17:33.821 fused_ordering(384) 00:17:33.821 fused_ordering(385) 00:17:33.821 fused_ordering(386) 00:17:33.821 fused_ordering(387) 00:17:33.821 fused_ordering(388) 00:17:33.821 fused_ordering(389) 00:17:33.821 fused_ordering(390) 00:17:33.821 fused_ordering(391) 00:17:33.821 fused_ordering(392) 00:17:33.821 fused_ordering(393) 00:17:33.821 fused_ordering(394) 00:17:33.821 fused_ordering(395) 00:17:33.821 fused_ordering(396) 00:17:33.821 fused_ordering(397) 00:17:33.821 fused_ordering(398) 00:17:33.821 fused_ordering(399) 00:17:33.821 fused_ordering(400) 00:17:33.821 fused_ordering(401) 00:17:33.821 fused_ordering(402) 00:17:33.821 fused_ordering(403) 00:17:33.821 fused_ordering(404) 00:17:33.821 fused_ordering(405) 00:17:33.821 fused_ordering(406) 00:17:33.821 fused_ordering(407) 00:17:33.821 fused_ordering(408) 00:17:33.821 fused_ordering(409) 00:17:33.821 fused_ordering(410) 00:17:33.821 fused_ordering(411) 00:17:33.821 fused_ordering(412) 00:17:33.821 fused_ordering(413) 00:17:33.821 fused_ordering(414) 00:17:33.821 fused_ordering(415) 00:17:33.821 fused_ordering(416) 00:17:33.821 fused_ordering(417) 00:17:33.821 fused_ordering(418) 00:17:33.821 
fused_ordering(419) 00:17:33.821 fused_ordering(420) 00:17:33.821 fused_ordering(421) 00:17:33.821 fused_ordering(422) 00:17:33.821 fused_ordering(423) 00:17:33.821 fused_ordering(424) 00:17:33.821 fused_ordering(425) 00:17:33.821 fused_ordering(426) 00:17:33.821 fused_ordering(427) 00:17:33.821 fused_ordering(428) 00:17:33.821 fused_ordering(429) 00:17:33.821 fused_ordering(430) 00:17:33.821 fused_ordering(431) 00:17:33.821 fused_ordering(432) 00:17:33.821 fused_ordering(433) 00:17:33.821 fused_ordering(434) 00:17:33.821 fused_ordering(435) 00:17:33.821 fused_ordering(436) 00:17:33.821 fused_ordering(437) 00:17:33.821 fused_ordering(438) 00:17:33.821 fused_ordering(439) 00:17:33.821 fused_ordering(440) 00:17:33.821 fused_ordering(441) 00:17:33.821 fused_ordering(442) 00:17:33.821 fused_ordering(443) 00:17:33.821 fused_ordering(444) 00:17:33.821 fused_ordering(445) 00:17:33.821 fused_ordering(446) 00:17:33.821 fused_ordering(447) 00:17:33.821 fused_ordering(448) 00:17:33.821 fused_ordering(449) 00:17:33.821 fused_ordering(450) 00:17:33.821 fused_ordering(451) 00:17:33.821 fused_ordering(452) 00:17:33.821 fused_ordering(453) 00:17:33.821 fused_ordering(454) 00:17:33.821 fused_ordering(455) 00:17:33.821 fused_ordering(456) 00:17:33.821 fused_ordering(457) 00:17:33.821 fused_ordering(458) 00:17:33.821 fused_ordering(459) 00:17:33.821 fused_ordering(460) 00:17:33.821 fused_ordering(461) 00:17:33.821 fused_ordering(462) 00:17:33.821 fused_ordering(463) 00:17:33.821 fused_ordering(464) 00:17:33.821 fused_ordering(465) 00:17:33.821 fused_ordering(466) 00:17:33.821 fused_ordering(467) 00:17:33.821 fused_ordering(468) 00:17:33.821 fused_ordering(469) 00:17:33.821 fused_ordering(470) 00:17:33.821 fused_ordering(471) 00:17:33.821 fused_ordering(472) 00:17:33.821 fused_ordering(473) 00:17:33.821 fused_ordering(474) 00:17:33.821 fused_ordering(475) 00:17:33.821 fused_ordering(476) 00:17:33.821 fused_ordering(477) 00:17:33.821 fused_ordering(478) 00:17:33.821 fused_ordering(479) 00:17:33.821 fused_ordering(480) 00:17:33.821 fused_ordering(481) 00:17:33.821 fused_ordering(482) 00:17:33.821 fused_ordering(483) 00:17:33.821 fused_ordering(484) 00:17:33.821 fused_ordering(485) 00:17:33.821 fused_ordering(486) 00:17:33.821 fused_ordering(487) 00:17:33.821 fused_ordering(488) 00:17:33.821 fused_ordering(489) 00:17:33.821 fused_ordering(490) 00:17:33.821 fused_ordering(491) 00:17:33.821 fused_ordering(492) 00:17:33.821 fused_ordering(493) 00:17:33.821 fused_ordering(494) 00:17:33.821 fused_ordering(495) 00:17:33.821 fused_ordering(496) 00:17:33.821 fused_ordering(497) 00:17:33.821 fused_ordering(498) 00:17:33.821 fused_ordering(499) 00:17:33.821 fused_ordering(500) 00:17:33.821 fused_ordering(501) 00:17:33.821 fused_ordering(502) 00:17:33.821 fused_ordering(503) 00:17:33.821 fused_ordering(504) 00:17:33.821 fused_ordering(505) 00:17:33.821 fused_ordering(506) 00:17:33.821 fused_ordering(507) 00:17:33.821 fused_ordering(508) 00:17:33.821 fused_ordering(509) 00:17:33.821 fused_ordering(510) 00:17:33.821 fused_ordering(511) 00:17:33.821 fused_ordering(512) 00:17:33.821 fused_ordering(513) 00:17:33.821 fused_ordering(514) 00:17:33.821 fused_ordering(515) 00:17:33.821 fused_ordering(516) 00:17:33.821 fused_ordering(517) 00:17:33.821 fused_ordering(518) 00:17:33.821 fused_ordering(519) 00:17:33.821 fused_ordering(520) 00:17:33.821 fused_ordering(521) 00:17:33.821 fused_ordering(522) 00:17:33.821 fused_ordering(523) 00:17:33.821 fused_ordering(524) 00:17:33.821 fused_ordering(525) 00:17:33.821 fused_ordering(526) 
00:17:33.821 fused_ordering(527) 00:17:33.821 fused_ordering(528) 00:17:33.821 fused_ordering(529) 00:17:33.821 fused_ordering(530) 00:17:33.821 fused_ordering(531) 00:17:33.821 fused_ordering(532) 00:17:33.821 fused_ordering(533) 00:17:33.821 fused_ordering(534) 00:17:33.821 fused_ordering(535) 00:17:33.821 fused_ordering(536) 00:17:33.821 fused_ordering(537) 00:17:33.821 fused_ordering(538) 00:17:33.821 fused_ordering(539) 00:17:33.821 fused_ordering(540) 00:17:33.821 fused_ordering(541) 00:17:33.821 fused_ordering(542) 00:17:33.821 fused_ordering(543) 00:17:33.821 fused_ordering(544) 00:17:33.821 fused_ordering(545) 00:17:33.821 fused_ordering(546) 00:17:33.821 fused_ordering(547) 00:17:33.821 fused_ordering(548) 00:17:33.821 fused_ordering(549) 00:17:33.821 fused_ordering(550) 00:17:33.821 fused_ordering(551) 00:17:33.821 fused_ordering(552) 00:17:33.821 fused_ordering(553) 00:17:33.821 fused_ordering(554) 00:17:33.822 fused_ordering(555) 00:17:33.822 fused_ordering(556) 00:17:33.822 fused_ordering(557) 00:17:33.822 fused_ordering(558) 00:17:33.822 fused_ordering(559) 00:17:33.822 fused_ordering(560) 00:17:33.822 fused_ordering(561) 00:17:33.822 fused_ordering(562) 00:17:33.822 fused_ordering(563) 00:17:33.822 fused_ordering(564) 00:17:33.822 fused_ordering(565) 00:17:33.822 fused_ordering(566) 00:17:33.822 fused_ordering(567) 00:17:33.822 fused_ordering(568) 00:17:33.822 fused_ordering(569) 00:17:33.822 fused_ordering(570) 00:17:33.822 fused_ordering(571) 00:17:33.822 fused_ordering(572) 00:17:33.822 fused_ordering(573) 00:17:33.822 fused_ordering(574) 00:17:33.822 fused_ordering(575) 00:17:33.822 fused_ordering(576) 00:17:33.822 fused_ordering(577) 00:17:33.822 fused_ordering(578) 00:17:33.822 fused_ordering(579) 00:17:33.822 fused_ordering(580) 00:17:33.822 fused_ordering(581) 00:17:33.822 fused_ordering(582) 00:17:33.822 fused_ordering(583) 00:17:33.822 fused_ordering(584) 00:17:33.822 fused_ordering(585) 00:17:33.822 fused_ordering(586) 00:17:33.822 fused_ordering(587) 00:17:33.822 fused_ordering(588) 00:17:33.822 fused_ordering(589) 00:17:33.822 fused_ordering(590) 00:17:33.822 fused_ordering(591) 00:17:33.822 fused_ordering(592) 00:17:33.822 fused_ordering(593) 00:17:33.822 fused_ordering(594) 00:17:33.822 fused_ordering(595) 00:17:33.822 fused_ordering(596) 00:17:33.822 fused_ordering(597) 00:17:33.822 fused_ordering(598) 00:17:33.822 fused_ordering(599) 00:17:33.822 fused_ordering(600) 00:17:33.822 fused_ordering(601) 00:17:33.822 fused_ordering(602) 00:17:33.822 fused_ordering(603) 00:17:33.822 fused_ordering(604) 00:17:33.822 fused_ordering(605) 00:17:33.822 fused_ordering(606) 00:17:33.822 fused_ordering(607) 00:17:33.822 fused_ordering(608) 00:17:33.822 fused_ordering(609) 00:17:33.822 fused_ordering(610) 00:17:33.822 fused_ordering(611) 00:17:33.822 fused_ordering(612) 00:17:33.822 fused_ordering(613) 00:17:33.822 fused_ordering(614) 00:17:33.822 fused_ordering(615) 00:17:33.822 fused_ordering(616) 00:17:33.822 fused_ordering(617) 00:17:33.822 fused_ordering(618) 00:17:33.822 fused_ordering(619) 00:17:33.822 fused_ordering(620) 00:17:33.822 fused_ordering(621) 00:17:33.822 fused_ordering(622) 00:17:33.822 fused_ordering(623) 00:17:33.822 fused_ordering(624) 00:17:33.822 fused_ordering(625) 00:17:33.822 fused_ordering(626) 00:17:33.822 fused_ordering(627) 00:17:33.822 fused_ordering(628) 00:17:33.822 fused_ordering(629) 00:17:33.822 fused_ordering(630) 00:17:33.822 fused_ordering(631) 00:17:33.822 fused_ordering(632) 00:17:33.822 fused_ordering(633) 00:17:33.822 
fused_ordering(634) 00:17:33.822 [fused_ordering(635) through fused_ordering(955) repeat once per iteration, timestamps 00:17:33.822-00:17:34.081] 00:17:34.081 fused_ordering(956) 
00:17:34.081 fused_ordering(957) 00:17:34.081 fused_ordering(958) 00:17:34.081 fused_ordering(959) 00:17:34.081 fused_ordering(960) 00:17:34.081 fused_ordering(961) 00:17:34.081 fused_ordering(962) 00:17:34.081 fused_ordering(963) 00:17:34.081 fused_ordering(964) 00:17:34.081 fused_ordering(965) 00:17:34.081 fused_ordering(966) 00:17:34.081 fused_ordering(967) 00:17:34.081 fused_ordering(968) 00:17:34.081 fused_ordering(969) 00:17:34.081 fused_ordering(970) 00:17:34.081 fused_ordering(971) 00:17:34.081 fused_ordering(972) 00:17:34.081 fused_ordering(973) 00:17:34.081 fused_ordering(974) 00:17:34.081 fused_ordering(975) 00:17:34.081 fused_ordering(976) 00:17:34.081 fused_ordering(977) 00:17:34.081 fused_ordering(978) 00:17:34.081 fused_ordering(979) 00:17:34.081 fused_ordering(980) 00:17:34.081 fused_ordering(981) 00:17:34.081 fused_ordering(982) 00:17:34.081 fused_ordering(983) 00:17:34.081 fused_ordering(984) 00:17:34.081 fused_ordering(985) 00:17:34.081 fused_ordering(986) 00:17:34.081 fused_ordering(987) 00:17:34.081 fused_ordering(988) 00:17:34.081 fused_ordering(989) 00:17:34.081 fused_ordering(990) 00:17:34.081 fused_ordering(991) 00:17:34.081 fused_ordering(992) 00:17:34.081 fused_ordering(993) 00:17:34.081 fused_ordering(994) 00:17:34.081 fused_ordering(995) 00:17:34.081 fused_ordering(996) 00:17:34.081 fused_ordering(997) 00:17:34.081 fused_ordering(998) 00:17:34.081 fused_ordering(999) 00:17:34.081 fused_ordering(1000) 00:17:34.081 fused_ordering(1001) 00:17:34.081 fused_ordering(1002) 00:17:34.081 fused_ordering(1003) 00:17:34.081 fused_ordering(1004) 00:17:34.081 fused_ordering(1005) 00:17:34.081 fused_ordering(1006) 00:17:34.081 fused_ordering(1007) 00:17:34.081 fused_ordering(1008) 00:17:34.081 fused_ordering(1009) 00:17:34.081 fused_ordering(1010) 00:17:34.081 fused_ordering(1011) 00:17:34.081 fused_ordering(1012) 00:17:34.081 fused_ordering(1013) 00:17:34.081 fused_ordering(1014) 00:17:34.081 fused_ordering(1015) 00:17:34.081 fused_ordering(1016) 00:17:34.081 fused_ordering(1017) 00:17:34.081 fused_ordering(1018) 00:17:34.081 fused_ordering(1019) 00:17:34.081 fused_ordering(1020) 00:17:34.081 fused_ordering(1021) 00:17:34.081 fused_ordering(1022) 00:17:34.081 fused_ordering(1023) 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:34.081 rmmod nvme_rdma 00:17:34.081 rmmod nvme_fabrics 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- 
# set -e 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2220189 ']' 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2220189 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2220189 ']' 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2220189 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.081 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2220189 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2220189' 00:17:34.339 killing process with pid 2220189 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2220189 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2220189 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:34.339 00:17:34.339 real 0m7.097s 00:17:34.339 user 0m3.532s 00:17:34.339 sys 0m4.627s 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:34.339 ************************************ 00:17:34.339 END TEST nvmf_fused_ordering 00:17:34.339 ************************************ 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.339 10:39:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.597 ************************************ 00:17:34.597 START TEST nvmf_ns_masking 00:17:34.597 ************************************ 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:17:34.597 * Looking for test storage... 
00:17:34.597 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.597 
10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=43f84505-96f3-42c4-890e-1ddec69595ac 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=16cbb958-7796-4cc2-8fb5-ad33335077e7 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a81a72c2-ea10-4503-b30d-7b06d263f810 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:34.597 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.598 10:39:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.861 10:39:47 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # 
pci_devs=("${mlx[@]}") 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:39.861 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:39.861 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:39.861 Found net devices under 0000:da:00.0: mlx_0_0 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.861 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:39.862 Found net devices under 0000:da:00.1: mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:39.862 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:39.862 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:39.862 altname enp218s0f0np0 00:17:39.862 altname ens818f0np0 00:17:39.862 inet 192.168.100.8/24 scope global mlx_0_0 00:17:39.862 valid_lft forever preferred_lft forever 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:39.862 10:39:47 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:39.862 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:39.862 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:39.862 altname enp218s0f1np1 00:17:39.862 altname ens818f1np1 00:17:39.862 inet 192.168.100.9/24 scope global mlx_0_1 00:17:39.862 valid_lft forever preferred_lft forever 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:39.862 10:39:47 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:39.862 192.168.100.9' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:39.862 192.168.100.9' 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:17:39.862 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:39.863 192.168.100.9' 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2223495 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2223495 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2223495 ']' 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.863 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:40.120 [2024-07-24 10:39:47.318898] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:17:40.120 [2024-07-24 10:39:47.318942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.120 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.120 [2024-07-24 10:39:47.373804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.120 [2024-07-24 10:39:47.413423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.120 [2024-07-24 10:39:47.413462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.120 [2024-07-24 10:39:47.413468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.120 [2024-07-24 10:39:47.413474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.120 [2024-07-24 10:39:47.413479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.120 [2024-07-24 10:39:47.413502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.120 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.120 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:40.120 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.120 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.120 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:40.120 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.120 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:40.378 [2024-07-24 10:39:47.696792] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x190d240/0x19116f0) succeed. 
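The target setup traced above and the masking checks that follow reduce to a short sequence of SPDK RPC calls plus an nvme-cli connect from the host side. The sketch below condenses them for reference; it assumes the in-tree scripts/rpc.py with its default socket, shortens the workspace paths, and uses <host-uuid> as a stand-in for the uuidgen-generated host identifier, so it is an illustration of the flow rather than a verbatim excerpt of ns_masking.sh:

  # create the RDMA transport and a subsystem backed by a malloc bdev
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # an auto-visible namespace is seen by every connecting host ...
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  # ... while a --no-auto-visible namespace is exposed only to hosts added explicitly
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  ./scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # host side: connect and check which namespace GUIDs are visible
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I <host-uuid> -a 192.168.100.8 -s 4420
  nvme list-ns /dev/nvme0
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Whether the masked namespace shows up in nvme list-ns before and after the nvmf_ns_add_host / nvmf_ns_remove_host calls is exactly what the ns_is_visible checks in the trace below assert against the reported nguid.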
00:17:40.378 [2024-07-24 10:39:47.705212] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x190e6f0/0x1952d80) succeed. 00:17:40.378 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:40.378 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:40.378 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:40.635 Malloc1 00:17:40.635 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:40.892 Malloc2 00:17:40.892 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:40.892 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:41.150 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:41.408 [2024-07-24 10:39:48.614092] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:41.408 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:41.408 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a81a72c2-ea10-4503-b30d-7b06d263f810 -a 192.168.100.8 -s 4420 -i 4 00:17:41.666 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:41.666 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:41.666 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.666 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:41.666 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:43.565 10:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.565 [ 0]:0x1 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.565 10:39:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=237399d118d9419e9795ff3b23aaadf1 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 237399d118d9419e9795ff3b23aaadf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.824 [ 0]:0x1 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=237399d118d9419e9795ff3b23aaadf1 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 237399d118d9419e9795ff3b23aaadf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.824 [ 1]:0x2 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.824 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:44.082 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2e1c9f6bed34dd2adf7841764b21142 00:17:44.082 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2e1c9f6bed34dd2adf7841764b21142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:44.082 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@75 -- # disconnect 00:17:44.082 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.340 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:44.598 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:44.598 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:44.598 10:39:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a81a72c2-ea10-4503-b30d-7b06d263f810 -a 192.168.100.8 -s 4420 -i 4 00:17:44.856 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:44.856 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:44.856 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.856 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:44.856 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:44.856 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.437 [ 0]:0x2 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2e1c9f6bed34dd2adf7841764b21142 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2e1c9f6bed34dd2adf7841764b21142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.437 [ 0]:0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.437 
10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=237399d118d9419e9795ff3b23aaadf1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 237399d118d9419e9795ff3b23aaadf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.437 [ 1]:0x2 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2e1c9f6bed34dd2adf7841764b21142 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2e1c9f6bed34dd2adf7841764b21142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.437 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # es=1 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.695 [ 0]:0x2 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2e1c9f6bed34dd2adf7841764b21142 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2e1c9f6bed34dd2adf7841764b21142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:47.695 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.953 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:48.210 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:48.210 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a81a72c2-ea10-4503-b30d-7b06d263f810 -a 192.168.100.8 -s 4420 -i 4 00:17:48.467 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:48.467 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:48.467 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.467 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:48.467 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:48.467 10:39:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:50.362 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:50.362 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:50.362 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.362 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:50.362 10:39:57 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.362 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:50.362 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:50.362 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.620 [ 0]:0x1 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=237399d118d9419e9795ff3b23aaadf1 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 237399d118d9419e9795ff3b23aaadf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.620 [ 1]:0x2 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2e1c9f6bed34dd2adf7841764b21142 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2e1c9f6bed34dd2adf7841764b21142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.620 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
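The repeated ns_is_visible checks in this trace (target/ns_masking.sh@43-45) all follow one pattern: list the namespaces on the connected controller, then read the namespace's NGUID and treat an all-zero value as "masked". A minimal bash sketch of that probe, reconstructed from the trace; the function body and variable names here are illustrative, not the script verbatim:

  # Probe whether namespace $1 (e.g. 0x1) is visible through /dev/nvme0.
  ns_is_visible() {
      local nsid=$1
      nvme list-ns /dev/nvme0 | grep "$nsid"      # prints "[ 0]:0x1"-style lines when the namespace is listed
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      # A namespace hidden by nvmf_ns_remove_host identifies with an all-zero NGUID,
      # so this comparison fails (and the surrounding NOT wrapper expects that failure).
      [[ $nguid != "00000000000000000000000000000000" ]]
  }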
00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.876 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.877 [ 0]:0x2 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2e1c9f6bed34dd2adf7841764b21142 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2e1c9f6bed34dd2adf7841764b21142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.877 10:39:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:17:50.877 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:51.133 [2024-07-24 10:39:58.363067] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:51.133 request: 00:17:51.133 { 00:17:51.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.133 "nsid": 2, 00:17:51.133 "host": "nqn.2016-06.io.spdk:host1", 00:17:51.133 "method": "nvmf_ns_remove_host", 00:17:51.133 "req_id": 1 00:17:51.133 } 00:17:51.133 Got JSON-RPC error response 00:17:51.133 response: 00:17:51.133 { 00:17:51.133 "code": -32602, 00:17:51.133 "message": "Invalid parameters" 00:17:51.133 } 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme 
id-ns /dev/nvme0 -n 0x1 -o json 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:51.134 [ 0]:0x2 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2e1c9f6bed34dd2adf7841764b21142 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2e1c9f6bed34dd2adf7841764b21142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:51.134 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.390 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2225499 00:17:51.390 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2225499 /var/tmp/host.sock 00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2225499 ']' 00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:51.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
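At this point the test launches a second SPDK application (the "host" side, pid 2225499) with its RPC server on /var/tmp/host.sock and blocks until that socket is ready before issuing host-side rpc.py calls. A hedged sketch of such a wait loop, using only standard shell; the real waitforlisten in common/autotest_common.sh is more elaborate than this:

  # Wait until the target process has created its UNIX-domain RPC socket.
  wait_for_rpc_socket() {
      local pid=$1 sock=$2 i=0
      while (( i++ < 100 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # give up if the process died
          [[ -S $sock ]] && return 0               # socket is up: rpc.py -s "$sock" ... can connect
          sleep 0.1
      done
      return 1
  }
  # e.g. wait_for_rpc_socket "$hostpid" /var/tmp/host.sock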
00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.391 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:51.391 [2024-07-24 10:39:58.844934] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:17:51.391 [2024-07-24 10:39:58.844977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225499 ] 00:17:51.648 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.648 [2024-07-24 10:39:58.899541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.648 [2024-07-24 10:39:58.939298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.905 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.905 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:51.905 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:51.905 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:52.162 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 43f84505-96f3-42c4-890e-1ddec69595ac 00:17:52.162 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:52.162 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 43F8450596F342C4890E1DDEC69595AC -i 00:17:52.419 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 16cbb958-7796-4cc2-8fb5-ad33335077e7 00:17:52.419 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:52.420 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 16CBB95877964CC28FB5AD33335077E7 -i 00:17:52.420 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:52.677 10:39:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:52.934 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:52.934 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:53.192 nvme0n1 00:17:53.192 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:53.192 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:53.192 nvme1n2 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:53.449 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:53.706 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 43f84505-96f3-42c4-890e-1ddec69595ac == \4\3\f\8\4\5\0\5\-\9\6\f\3\-\4\2\c\4\-\8\9\0\e\-\1\d\d\e\c\6\9\5\9\5\a\c ]] 00:17:53.706 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:53.706 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:53.706 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 16cbb958-7796-4cc2-8fb5-ad33335077e7 == \1\6\c\b\b\9\5\8\-\7\7\9\6\-\4\c\c\2\-\8\f\b\5\-\a\d\3\3\3\3\5\0\7\7\e\7 ]] 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2225499 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2225499 ']' 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2225499 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2225499 
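The namespaces were created above via uuid2nguid (nvmf/common.sh@759), and the bdev_get_bdevs checks then confirm the host sees the same identifiers back as lowercase, hyphenated UUIDs. Only the `tr -d -` step is visible in the trace; a plausible sketch of the whole conversion, with the upper-casing step hedged as an assumption:

  # Turn a UUID into the 32-hex-digit NGUID form passed to nvmf_subsystem_add_ns -g.
  uuid2nguid() {
      echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
  }
  # uuid2nguid 43f84505-96f3-42c4-890e-1ddec69595ac  ->  43F8450596F342C4890E1DDEC69595AC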
00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2225499' 00:17:53.964 killing process with pid 2225499 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2225499 00:17:53.964 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2225499 00:17:54.221 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:54.479 rmmod nvme_rdma 00:17:54.479 rmmod nvme_fabrics 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2223495 ']' 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2223495 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2223495 ']' 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2223495 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2223495 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2223495' 00:17:54.479 killing process with pid 
2223495 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2223495 00:17:54.479 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2223495 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:54.737 00:17:54.737 real 0m20.231s 00:17:54.737 user 0m23.257s 00:17:54.737 sys 0m5.830s 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.737 ************************************ 00:17:54.737 END TEST nvmf_ns_masking 00:17:54.737 ************************************ 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.737 ************************************ 00:17:54.737 START TEST nvmf_nvme_cli 00:17:54.737 ************************************ 00:17:54.737 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:54.995 * Looking for test storage... 
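The nvmf_ns_masking teardown just completed above (killprocess at common/autotest_common.sh@950-@974) sanity-checks the pid and the process name before signalling the target. A simplified sketch of that flow, built only from the commands visible in the trace; the real helper carries additional platform checks:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0                  # nothing to do if it is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && return 1                         # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                                 # reap it so hugepages and listeners are released
  }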
00:17:54.995 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.995 10:40:02 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:54.995 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.996 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.996 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.996 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:54.996 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:54.996 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:54.996 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:00.266 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:00.266 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:00.266 Found net devices under 0000:da:00.0: mlx_0_0 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:00.266 Found net devices under 0000:da:00.1: mlx_0_1 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:18:00.266 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:00.267 10:40:07 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:00.267 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:00.267 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:18:00.267 altname enp218s0f0np0 00:18:00.267 altname ens818f0np0 00:18:00.267 inet 192.168.100.8/24 scope global mlx_0_0 00:18:00.267 valid_lft forever preferred_lft forever 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:00.267 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:00.267 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:18:00.267 altname enp218s0f1np1 00:18:00.267 altname ens818f1np1 00:18:00.267 inet 192.168.100.9/24 scope global mlx_0_1 00:18:00.267 valid_lft forever preferred_lft forever 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.267 
10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:00.267 192.168.100.9' 00:18:00.267 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:00.267 192.168.100.9' 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:00.268 192.168.100.9' 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2229049 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2229049 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2229049 ']' 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 [2024-07-24 10:40:07.442035] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:18:00.268 [2024-07-24 10:40:07.442081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.268 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.268 [2024-07-24 10:40:07.496407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.268 [2024-07-24 10:40:07.538233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.268 [2024-07-24 10:40:07.538275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.268 [2024-07-24 10:40:07.538282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.268 [2024-07-24 10:40:07.538287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.268 [2024-07-24 10:40:07.538292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
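The nvmfappstart/waitforlisten sequence traced above boils down to launching the SPDK NVMe-oF target and polling its RPC socket until it answers. A minimal sketch using the paths and core mask from this run; the rpc_get_methods readiness probe is an illustrative stand-in for the harness's waitforlisten helper, not a copy of it:

  # Start the NVMe-oF target with the same core mask (-m 0xF) and trace flags (-e 0xFFFF) as traced above.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket until the target is up (stand-in for waitforlisten /var/tmp/spdk.sock).
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done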
00:18:00.268 [2024-07-24 10:40:07.538361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.268 [2024-07-24 10:40:07.538453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.268 [2024-07-24 10:40:07.538545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.268 [2024-07-24 10:40:07.538547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.268 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 [2024-07-24 10:40:07.715791] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa9e6a0/0xaa2b70) succeed. 00:18:00.527 [2024-07-24 10:40:07.724877] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa9fc90/0xae4200) succeed. 
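The rpc_cmd wrappers traced here and in the entries just below issue ordinary SPDK RPCs. Condensed into plain rpc.py calls, with every value copied from this run, the target-side setup is roughly:

  # Create the RDMA transport (matches the create_ib_device notices above).
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # Back two 64 MiB / 512 B-block malloc bdevs.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # Create the subsystem, attach both namespaces, and listen on the first RDMA IP.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420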
00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 Malloc0 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 Malloc1 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.527 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 [2024-07-24 10:40:07.915393] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:00.528 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:00.528 10:40:07 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 10:40:07 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:18:00.787 00:18:00.787 Discovery Log Number of Records 2, Generation counter 2 00:18:00.787 =====Discovery Log Entry 0====== 00:18:00.787 trtype: rdma 00:18:00.787 adrfam: ipv4 00:18:00.787 subtype: current discovery subsystem 00:18:00.787 treq: not required 00:18:00.787 portid: 0 00:18:00.787 trsvcid: 4420 00:18:00.787 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:00.787 traddr: 192.168.100.8 00:18:00.787 eflags: explicit discovery connections, duplicate discovery information 00:18:00.787 rdma_prtype: not specified 00:18:00.787 rdma_qptype: connected 00:18:00.787 rdma_cms: rdma-cm 00:18:00.787 rdma_pkey: 0x0000 00:18:00.787 =====Discovery Log Entry 1====== 00:18:00.787 trtype: rdma 00:18:00.787 adrfam: ipv4 00:18:00.787 subtype: nvme subsystem 00:18:00.787 treq: not required 00:18:00.787 portid: 0 00:18:00.787 trsvcid: 4420 00:18:00.787 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:00.787 traddr: 192.168.100.8 00:18:00.787 eflags: none 00:18:00.787 rdma_prtype: not specified 00:18:00.787 rdma_qptype: connected 00:18:00.787 rdma_cms: rdma-cm 00:18:00.787 rdma_pkey: 0x0000 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:00.787 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:01.721 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:01.721 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:01.721 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.721 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:01.721 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:01.721 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:03.627 10:40:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:03.627 /dev/nvme0n1 ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:03.627 10:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.999 
10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:04.999 rmmod nvme_rdma 00:18:04.999 rmmod nvme_fabrics 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2229049 ']' 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2229049 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2229049 ']' 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2229049 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2229049 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2229049' 00:18:04.999 killing process with pid 2229049 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2229049 00:18:04.999 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2229049 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:05.258 00:18:05.258 real 0m10.394s 00:18:05.258 user 0m20.891s 00:18:05.258 sys 0m4.435s 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.258 ************************************ 00:18:05.258 END TEST nvmf_nvme_cli 00:18:05.258 ************************************ 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.258 ************************************ 00:18:05.258 START TEST nvmf_auth_target 00:18:05.258 ************************************ 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:05.258 * Looking for test storage... 00:18:05.258 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.258 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 
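The host identity exported a little above (NVME_HOSTNQN / NVME_HOSTID) comes straight from nvme-cli. A short sketch; deriving the host ID from the NQN's UUID suffix is one plausible reading of the helper, not a verbatim copy:

  # Generate a host NQN; the UUID differs per host (this run produced ...uuid:803833e2-2ada-e911-906e-0017a4403562).
  hostnqn=$(nvme gen-hostnqn)
  # The host ID used by the test scripts is the UUID portion after the last ':'.
  hostid=${hostnqn##*:}
  echo "--hostnqn=$hostnqn --hostid=$hostid"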
00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.259 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@295 -- # net_devs=() 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:10.522 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:10.522 10:40:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:10.522 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:10.522 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:10.523 Found net devices under 0000:da:00.0: mlx_0_0 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: 
mlx_0_1' 00:18:10.523 Found net devices under 0000:da:00.1: mlx_0_1 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:10.523 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.781 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:10.781 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.781 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.781 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.781 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:10.781 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:10.781 
10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.781 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:10.782 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.782 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:18:10.782 altname enp218s0f0np0 00:18:10.782 altname ens818f0np0 00:18:10.782 inet 192.168.100.8/24 scope global mlx_0_0 00:18:10.782 valid_lft forever preferred_lft forever 00:18:10.782 10:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:10.782 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.782 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:18:10.782 altname enp218s0f1np1 00:18:10.782 altname ens818f1np1 00:18:10.782 inet 192.168.100.9/24 scope global mlx_0_1 00:18:10.782 valid_lft forever preferred_lft forever 
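The get_ip_address helper traced above reduces to a pipeline over ip(8); mlx_0_0 and mlx_0_1 are the RDMA netdevs on this rig, so substitute your own interface names:

  # First IPv4 address on each interface, as used for NVMF_FIRST/SECOND_TARGET_IP above.
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8 in this run
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9 in this run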
00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:10.782 192.168.100.9' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:10.782 192.168.100.9' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:10.782 192.168.100.9' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2233058 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2233058 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2233058 ']' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.782 10:40:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.782 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2233077 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=90881cb243bd082fb3f0d3c433cdc9a81c87ffb7e45727c8 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dDI 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 90881cb243bd082fb3f0d3c433cdc9a81c87ffb7e45727c8 0 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 90881cb243bd082fb3f0d3c433cdc9a81c87ffb7e45727c8 0 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=90881cb243bd082fb3f0d3c433cdc9a81c87ffb7e45727c8 00:18:11.041 
10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dDI 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dDI 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.dDI 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:11.041 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=29d349553b3c40869356396a5386a109d56216c10c94886a0bf8b54c3e135005 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gad 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 29d349553b3c40869356396a5386a109d56216c10c94886a0bf8b54c3e135005 3 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 29d349553b3c40869356396a5386a109d56216c10c94886a0bf8b54c3e135005 3 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=29d349553b3c40869356396a5386a109d56216c10c94886a0bf8b54c3e135005 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gad 00:18:11.042 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gad 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Gad 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=21ec44bc353a6670d359cc7eee742cb0 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MiM 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 21ec44bc353a6670d359cc7eee742cb0 1 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 21ec44bc353a6670d359cc7eee742cb0 1 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=21ec44bc353a6670d359cc7eee742cb0 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MiM 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MiM 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.MiM 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=51f1764bc96a8abee75b3419c8a35a29203b389b1f709670 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ik9 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 51f1764bc96a8abee75b3419c8a35a29203b389b1f709670 2 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 51f1764bc96a8abee75b3419c8a35a29203b389b1f709670 2 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.300 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=51f1764bc96a8abee75b3419c8a35a29203b389b1f709670 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ik9 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ik9 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ik9 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6632c53125159cd0b4207d53968f95ac429c3f9f3e490d60 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FSj 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6632c53125159cd0b4207d53968f95ac429c3f9f3e490d60 2 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6632c53125159cd0b4207d53968f95ac429c3f9f3e490d60 2 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6632c53125159cd0b4207d53968f95ac429c3f9f3e490d60 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FSj 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FSj 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.FSj 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3e63df8708318b76d12b93feb0635796 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T0t 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3e63df8708318b76d12b93feb0635796 1 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3e63df8708318b76d12b93feb0635796 1 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3e63df8708318b76d12b93feb0635796 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T0t 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T0t 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.T0t 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8a57144fa74eb715935c174a6dc91bff11018e931598afbae61091a140bd47bb 00:18:11.301 10:40:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Sf3 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8a57144fa74eb715935c174a6dc91bff11018e931598afbae61091a140bd47bb 3 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8a57144fa74eb715935c174a6dc91bff11018e931598afbae61091a140bd47bb 3 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8a57144fa74eb715935c174a6dc91bff11018e931598afbae61091a140bd47bb 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:11.301 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Sf3 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Sf3 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Sf3 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2233058 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2233058 ']' 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
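The gen_dhchap_key calls traced above boil down to: draw random hex from /dev/urandom with xxd, wrap it as a DHHC-1 secret, and store it mode 0600 in a temp file. The sketch below reproduces that flow outside the test harness; it is illustrative only (the variable names and the spdk.key-demo template are ad hoc), and it assumes the DHHC-1 layout suggested by the formatted secrets appearing later in this log: base64 of the key characters with their CRC-32 appended, taken here as little-endian.

# digest_id: 0=null, 1=sha256, 2=sha384, 3=sha512; hex_len = key length in hex characters
hex_len=48 digest_id=0
key=$(xxd -p -c0 -l $((hex_len / 2)) /dev/urandom)   # hex_len hex chars of random key material
keyfile=$(mktemp -t spdk.key-demo.XXX)
python3 - "$key" "$digest_id" > "$keyfile" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex string itself is the secret material
crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: CRC-32 appended little-endian
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
chmod 0600 "$keyfile"
cat "$keyfile"   # DHHC-1:00:<base64(key+crc)>: , same shape as the --dhchap-secret values below

The key files land in the keys[] / ckeys[] arrays and, as the next log lines show, are registered on both sides with keyring_file_add_key (rpc.py against /var/tmp/spdk.sock for the target and /var/tmp/host.sock for the host) before being referenced as --dhchap-key / --dhchap-ctrlr-key in nvmf_subsystem_add_host and bdev_nvme_attach_controller.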
00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2233077 /var/tmp/host.sock 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2233077 ']' 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:11.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.559 10:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dDI 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.dDI 00:18:11.817 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.dDI 00:18:12.074 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Gad ]] 00:18:12.074 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gad 00:18:12.074 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.074 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.074 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.074 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gad 00:18:12.074 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gad 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MiM 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.MiM 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.MiM 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ik9 ]] 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ik9 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ik9 00:18:12.332 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ik9 00:18:12.590 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:12.590 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FSj 00:18:12.590 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.590 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.590 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.590 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FSj 00:18:12.590 10:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FSj 00:18:12.847 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.T0t ]] 00:18:12.847 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T0t 00:18:12.847 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.847 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.847 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.847 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T0t 00:18:12.847 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T0t 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Sf3 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Sf3 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Sf3 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:13.105 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:13.363 10:40:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.363 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.621 00:18:13.621 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.621 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.621 10:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.879 { 00:18:13.879 "cntlid": 1, 00:18:13.879 "qid": 0, 00:18:13.879 "state": "enabled", 00:18:13.879 "thread": "nvmf_tgt_poll_group_000", 00:18:13.879 "listen_address": { 00:18:13.879 "trtype": "RDMA", 00:18:13.879 "adrfam": "IPv4", 00:18:13.879 "traddr": "192.168.100.8", 00:18:13.879 "trsvcid": "4420" 00:18:13.879 }, 00:18:13.879 "peer_address": { 00:18:13.879 "trtype": "RDMA", 00:18:13.879 "adrfam": "IPv4", 00:18:13.879 "traddr": "192.168.100.8", 00:18:13.879 "trsvcid": "41474" 00:18:13.879 }, 00:18:13.879 "auth": { 00:18:13.879 "state": "completed", 00:18:13.879 "digest": "sha256", 00:18:13.879 "dhgroup": "null" 00:18:13.879 } 00:18:13.879 } 00:18:13.879 ]' 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.879 10:40:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.879 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.137 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.759 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.017 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.274 00:18:15.274 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.274 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.274 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.532 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.532 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.532 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.532 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.532 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.532 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.532 { 00:18:15.532 "cntlid": 3, 00:18:15.532 "qid": 0, 00:18:15.532 "state": "enabled", 00:18:15.532 "thread": "nvmf_tgt_poll_group_000", 00:18:15.532 "listen_address": { 00:18:15.532 "trtype": "RDMA", 00:18:15.532 "adrfam": "IPv4", 00:18:15.532 "traddr": "192.168.100.8", 00:18:15.532 "trsvcid": "4420" 00:18:15.532 }, 00:18:15.532 "peer_address": { 00:18:15.532 "trtype": "RDMA", 00:18:15.532 "adrfam": "IPv4", 00:18:15.532 "traddr": "192.168.100.8", 00:18:15.533 "trsvcid": "55473" 00:18:15.533 }, 00:18:15.533 "auth": { 00:18:15.533 "state": "completed", 00:18:15.533 "digest": "sha256", 00:18:15.533 "dhgroup": "null" 00:18:15.533 } 00:18:15.533 } 00:18:15.533 ]' 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.533 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.790 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:18:16.367 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.625 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:16.625 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.625 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.625 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.625 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.625 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:16.625 10:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.625 10:40:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.625 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.882 00:18:16.882 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.882 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.882 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.140 { 00:18:17.140 "cntlid": 5, 00:18:17.140 "qid": 0, 00:18:17.140 "state": "enabled", 00:18:17.140 "thread": "nvmf_tgt_poll_group_000", 00:18:17.140 "listen_address": { 00:18:17.140 "trtype": "RDMA", 00:18:17.140 "adrfam": "IPv4", 00:18:17.140 "traddr": "192.168.100.8", 00:18:17.140 "trsvcid": "4420" 00:18:17.140 }, 00:18:17.140 "peer_address": { 00:18:17.140 "trtype": "RDMA", 00:18:17.140 "adrfam": "IPv4", 00:18:17.140 "traddr": "192.168.100.8", 00:18:17.140 "trsvcid": "36158" 00:18:17.140 }, 00:18:17.140 "auth": { 00:18:17.140 "state": "completed", 00:18:17.140 "digest": "sha256", 00:18:17.140 "dhgroup": "null" 00:18:17.140 } 00:18:17.140 } 00:18:17.140 ]' 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.140 10:40:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.140 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.397 10:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:18:17.962 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.221 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:18.222 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.222 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.222 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.222 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.222 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:18.222 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.481 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.481 00:18:18.739 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.739 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.739 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.739 { 00:18:18.739 "cntlid": 7, 00:18:18.739 "qid": 0, 00:18:18.739 "state": "enabled", 00:18:18.739 "thread": "nvmf_tgt_poll_group_000", 00:18:18.739 "listen_address": { 00:18:18.739 "trtype": "RDMA", 00:18:18.739 "adrfam": "IPv4", 00:18:18.739 "traddr": "192.168.100.8", 00:18:18.739 "trsvcid": "4420" 00:18:18.739 }, 00:18:18.739 "peer_address": { 00:18:18.739 "trtype": "RDMA", 00:18:18.739 "adrfam": "IPv4", 00:18:18.739 "traddr": "192.168.100.8", 00:18:18.739 "trsvcid": "56559" 00:18:18.739 }, 00:18:18.739 "auth": { 00:18:18.739 "state": "completed", 00:18:18.739 "digest": "sha256", 00:18:18.739 "dhgroup": "null" 00:18:18.739 } 00:18:18.739 } 00:18:18.739 ]' 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.739 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.997 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:18.997 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.997 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.997 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.997 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.997 10:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.931 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.190 00:18:20.190 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.190 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.190 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.447 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.447 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.447 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.447 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.447 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.447 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.447 { 00:18:20.447 "cntlid": 9, 00:18:20.447 "qid": 0, 00:18:20.447 "state": "enabled", 00:18:20.447 "thread": "nvmf_tgt_poll_group_000", 00:18:20.447 "listen_address": { 00:18:20.447 "trtype": "RDMA", 00:18:20.447 "adrfam": "IPv4", 00:18:20.447 "traddr": "192.168.100.8", 00:18:20.447 "trsvcid": "4420" 00:18:20.447 }, 00:18:20.447 "peer_address": { 00:18:20.447 "trtype": "RDMA", 00:18:20.447 "adrfam": "IPv4", 00:18:20.447 "traddr": "192.168.100.8", 00:18:20.447 "trsvcid": "54239" 00:18:20.447 }, 00:18:20.447 "auth": { 00:18:20.447 "state": "completed", 00:18:20.447 "digest": "sha256", 00:18:20.447 "dhgroup": "ffdhe2048" 00:18:20.447 } 00:18:20.447 } 00:18:20.447 ]' 00:18:20.447 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.448 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.448 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.448 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.448 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.705 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.705 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.705 10:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.705 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:18:21.269 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.526 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:21.526 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.526 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.526 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.526 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.526 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.526 10:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.783 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.040 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.040 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.040 { 00:18:22.040 "cntlid": 11, 00:18:22.040 "qid": 0, 00:18:22.040 "state": "enabled", 00:18:22.040 "thread": "nvmf_tgt_poll_group_000", 00:18:22.040 "listen_address": { 00:18:22.040 "trtype": "RDMA", 00:18:22.040 "adrfam": "IPv4", 00:18:22.041 "traddr": "192.168.100.8", 00:18:22.041 "trsvcid": "4420" 00:18:22.041 }, 00:18:22.041 "peer_address": { 00:18:22.041 "trtype": "RDMA", 00:18:22.041 "adrfam": "IPv4", 00:18:22.041 "traddr": "192.168.100.8", 00:18:22.041 "trsvcid": "40674" 00:18:22.041 }, 00:18:22.041 "auth": { 00:18:22.041 "state": "completed", 00:18:22.041 "digest": "sha256", 00:18:22.041 "dhgroup": "ffdhe2048" 00:18:22.041 } 00:18:22.041 } 00:18:22.041 ]' 00:18:22.041 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.041 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.041 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.298 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.298 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.298 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.298 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.298 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.298 10:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.231 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.232 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.489 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.489 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.489 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.489 00:18:23.490 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.490 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:18:23.490 10:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.748 { 00:18:23.748 "cntlid": 13, 00:18:23.748 "qid": 0, 00:18:23.748 "state": "enabled", 00:18:23.748 "thread": "nvmf_tgt_poll_group_000", 00:18:23.748 "listen_address": { 00:18:23.748 "trtype": "RDMA", 00:18:23.748 "adrfam": "IPv4", 00:18:23.748 "traddr": "192.168.100.8", 00:18:23.748 "trsvcid": "4420" 00:18:23.748 }, 00:18:23.748 "peer_address": { 00:18:23.748 "trtype": "RDMA", 00:18:23.748 "adrfam": "IPv4", 00:18:23.748 "traddr": "192.168.100.8", 00:18:23.748 "trsvcid": "42510" 00:18:23.748 }, 00:18:23.748 "auth": { 00:18:23.748 "state": "completed", 00:18:23.748 "digest": "sha256", 00:18:23.748 "dhgroup": "ffdhe2048" 00:18:23.748 } 00:18:23.748 } 00:18:23.748 ]' 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.748 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.005 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.005 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.005 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.005 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.005 10:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:24.937 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.195 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.195 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.453 10:40:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.453 { 00:18:25.453 "cntlid": 15, 00:18:25.453 "qid": 0, 00:18:25.453 "state": "enabled", 00:18:25.453 "thread": "nvmf_tgt_poll_group_000", 00:18:25.453 "listen_address": { 00:18:25.453 "trtype": "RDMA", 00:18:25.453 "adrfam": "IPv4", 00:18:25.453 "traddr": "192.168.100.8", 00:18:25.453 "trsvcid": "4420" 00:18:25.453 }, 00:18:25.453 "peer_address": { 00:18:25.453 "trtype": "RDMA", 00:18:25.453 "adrfam": "IPv4", 00:18:25.453 "traddr": "192.168.100.8", 00:18:25.453 "trsvcid": "60380" 00:18:25.453 }, 00:18:25.453 "auth": { 00:18:25.453 "state": "completed", 00:18:25.453 "digest": "sha256", 00:18:25.453 "dhgroup": "ffdhe2048" 00:18:25.453 } 00:18:25.453 } 00:18:25.453 ]' 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.453 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.710 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.711 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.711 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.711 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.711 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.711 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.643 
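The passes traced so far (sha256 with the null and ffdhe2048 DH groups) all drive the same host-side RPC sequence from target/auth.sh. A minimal sketch of one such pass, assuming the socket paths, RDMA address, NQNs and key names (key0/ckey0) shown in the trace, with the DH-CHAP keys already loaded on both RPC servers earlier in the test and the target app listening on its default RPC socket:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0
    # Limit the host to a single digest and DH group for this pass.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Register the host on the target side with the matching key pair (default RPC socket assumed).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach an authenticated controller over RDMA, then tear it down before the next key is tried.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0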
10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:26.643 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.643 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.901 00:18:26.901 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.901 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.901 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.158 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.158 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.158 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:27.158 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.158 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.158 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.158 { 00:18:27.158 "cntlid": 17, 00:18:27.158 "qid": 0, 00:18:27.158 "state": "enabled", 00:18:27.158 "thread": "nvmf_tgt_poll_group_000", 00:18:27.158 "listen_address": { 00:18:27.158 "trtype": "RDMA", 00:18:27.158 "adrfam": "IPv4", 00:18:27.158 "traddr": "192.168.100.8", 00:18:27.158 "trsvcid": "4420" 00:18:27.158 }, 00:18:27.158 "peer_address": { 00:18:27.158 "trtype": "RDMA", 00:18:27.158 "adrfam": "IPv4", 00:18:27.158 "traddr": "192.168.100.8", 00:18:27.158 "trsvcid": "34920" 00:18:27.158 }, 00:18:27.158 "auth": { 00:18:27.158 "state": "completed", 00:18:27.158 "digest": "sha256", 00:18:27.159 "dhgroup": "ffdhe3072" 00:18:27.159 } 00:18:27.159 } 00:18:27.159 ]' 00:18:27.159 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.159 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.159 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.159 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.159 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.415 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.415 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.415 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.415 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:18:27.979 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.237 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:28.237 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.237 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.237 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.237 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.237 10:40:35 
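After the SPDK host detaches, each pass repeats the handshake from the kernel initiator, as in the nvme connect / disconnect shown just above. A sketch of that leg, assuming an nvme-cli build with DH-CHAP support and using the DHHC-1 secrets printed in the trace (abbreviated here):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0
    # Connect with the host secret and the bidirectional controller secret.
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
    nvme disconnect -n "$subnqn"
    # Drop the host entry on the target so the next key can be installed cleanly.
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"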
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.237 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.495 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.495 00:18:28.753 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.753 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.753 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.753 
10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.753 { 00:18:28.753 "cntlid": 19, 00:18:28.753 "qid": 0, 00:18:28.753 "state": "enabled", 00:18:28.753 "thread": "nvmf_tgt_poll_group_000", 00:18:28.753 "listen_address": { 00:18:28.753 "trtype": "RDMA", 00:18:28.753 "adrfam": "IPv4", 00:18:28.753 "traddr": "192.168.100.8", 00:18:28.753 "trsvcid": "4420" 00:18:28.753 }, 00:18:28.753 "peer_address": { 00:18:28.753 "trtype": "RDMA", 00:18:28.753 "adrfam": "IPv4", 00:18:28.753 "traddr": "192.168.100.8", 00:18:28.753 "trsvcid": "45915" 00:18:28.753 }, 00:18:28.753 "auth": { 00:18:28.753 "state": "completed", 00:18:28.753 "digest": "sha256", 00:18:28.753 "dhgroup": "ffdhe3072" 00:18:28.753 } 00:18:28.753 } 00:18:28.753 ]' 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.753 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.011 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.011 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.011 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.011 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.011 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.011 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.945 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.203 00:18:30.203 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.203 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.203 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.460 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.460 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.460 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.460 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.460 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.460 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.460 { 00:18:30.460 "cntlid": 21, 00:18:30.460 "qid": 0, 00:18:30.460 "state": "enabled", 00:18:30.460 "thread": "nvmf_tgt_poll_group_000", 
00:18:30.460 "listen_address": { 00:18:30.460 "trtype": "RDMA", 00:18:30.460 "adrfam": "IPv4", 00:18:30.460 "traddr": "192.168.100.8", 00:18:30.460 "trsvcid": "4420" 00:18:30.460 }, 00:18:30.460 "peer_address": { 00:18:30.460 "trtype": "RDMA", 00:18:30.460 "adrfam": "IPv4", 00:18:30.460 "traddr": "192.168.100.8", 00:18:30.460 "trsvcid": "47876" 00:18:30.460 }, 00:18:30.460 "auth": { 00:18:30.460 "state": "completed", 00:18:30.460 "digest": "sha256", 00:18:30.460 "dhgroup": "ffdhe3072" 00:18:30.460 } 00:18:30.460 } 00:18:30.460 ]' 00:18:30.460 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.461 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.461 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.461 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.461 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.717 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.717 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.717 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.717 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:18:31.282 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.539 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:31.539 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.539 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.539 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.539 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.539 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:31.539 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:31.798 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 
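Each qpairs listing above is checked the same way: the auth object reported by the target must show the digest and DH group that were forced on the host, with the handshake in the completed state. A sketch of those checks, with the subsystem NQN, jq filters and expected values taken from this ffdhe3072 pass:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]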
00:18:31.798 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.798 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.798 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.798 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.798 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.798 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.798 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.798 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.055 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.056 { 00:18:32.056 "cntlid": 23, 00:18:32.056 "qid": 0, 00:18:32.056 "state": "enabled", 00:18:32.056 "thread": "nvmf_tgt_poll_group_000", 00:18:32.056 "listen_address": { 00:18:32.056 "trtype": "RDMA", 00:18:32.056 "adrfam": "IPv4", 00:18:32.056 "traddr": "192.168.100.8", 00:18:32.056 "trsvcid": "4420" 00:18:32.056 }, 00:18:32.056 "peer_address": { 00:18:32.056 "trtype": "RDMA", 00:18:32.056 "adrfam": "IPv4", 00:18:32.056 "traddr": "192.168.100.8", 00:18:32.056 "trsvcid": "44598" 00:18:32.056 }, 00:18:32.056 
"auth": { 00:18:32.056 "state": "completed", 00:18:32.056 "digest": "sha256", 00:18:32.056 "dhgroup": "ffdhe3072" 00:18:32.056 } 00:18:32.056 } 00:18:32.056 ]' 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.056 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.314 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.314 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.314 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.314 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.314 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.314 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.245 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.502 00:18:33.503 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.503 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.503 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.760 { 00:18:33.760 "cntlid": 25, 00:18:33.760 "qid": 0, 00:18:33.760 "state": "enabled", 00:18:33.760 "thread": "nvmf_tgt_poll_group_000", 00:18:33.760 "listen_address": { 00:18:33.760 "trtype": "RDMA", 00:18:33.760 "adrfam": "IPv4", 00:18:33.760 "traddr": "192.168.100.8", 00:18:33.760 "trsvcid": "4420" 00:18:33.760 }, 00:18:33.760 "peer_address": { 00:18:33.760 "trtype": "RDMA", 00:18:33.760 "adrfam": "IPv4", 00:18:33.760 "traddr": "192.168.100.8", 00:18:33.760 "trsvcid": "55322" 00:18:33.760 }, 00:18:33.760 "auth": { 00:18:33.760 "state": "completed", 00:18:33.760 "digest": "sha256", 00:18:33.760 "dhgroup": "ffdhe4096" 00:18:33.760 } 00:18:33.760 } 00:18:33.760 ]' 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.760 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.017 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.017 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.017 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.017 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:18:34.582 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.840 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:34.840 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.840 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.840 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.840 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.840 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:34.840 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.097 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.354 00:18:35.354 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.355 { 00:18:35.355 "cntlid": 27, 00:18:35.355 "qid": 0, 00:18:35.355 "state": "enabled", 00:18:35.355 "thread": "nvmf_tgt_poll_group_000", 00:18:35.355 "listen_address": { 00:18:35.355 "trtype": "RDMA", 00:18:35.355 "adrfam": "IPv4", 00:18:35.355 "traddr": "192.168.100.8", 00:18:35.355 "trsvcid": "4420" 00:18:35.355 }, 00:18:35.355 "peer_address": { 00:18:35.355 "trtype": "RDMA", 00:18:35.355 "adrfam": "IPv4", 00:18:35.355 "traddr": "192.168.100.8", 00:18:35.355 "trsvcid": "34327" 00:18:35.355 }, 00:18:35.355 "auth": { 00:18:35.355 "state": "completed", 00:18:35.355 "digest": "sha256", 00:18:35.355 "dhgroup": "ffdhe4096" 00:18:35.355 } 00:18:35.355 } 00:18:35.355 ]' 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.355 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.612 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.612 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.612 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.612 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.612 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.612 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:18:36.241 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.497 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:36.497 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.497 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.497 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.497 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.497 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:36.497 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.754 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.011 00:18:37.011 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.011 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.011 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.011 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.011 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.011 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.011 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.268 { 00:18:37.268 "cntlid": 29, 00:18:37.268 "qid": 0, 00:18:37.268 "state": "enabled", 00:18:37.268 "thread": "nvmf_tgt_poll_group_000", 00:18:37.268 "listen_address": { 00:18:37.268 "trtype": "RDMA", 00:18:37.268 "adrfam": "IPv4", 00:18:37.268 "traddr": "192.168.100.8", 00:18:37.268 "trsvcid": "4420" 00:18:37.268 }, 00:18:37.268 "peer_address": { 00:18:37.268 "trtype": "RDMA", 00:18:37.268 "adrfam": "IPv4", 00:18:37.268 "traddr": "192.168.100.8", 00:18:37.268 "trsvcid": "45110" 00:18:37.268 }, 00:18:37.268 "auth": { 00:18:37.268 "state": "completed", 00:18:37.268 "digest": "sha256", 00:18:37.268 "dhgroup": "ffdhe4096" 00:18:37.268 } 00:18:37.268 } 00:18:37.268 ]' 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
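For reference, each pass traced above reduces to the same connect_authenticate cycle. The following is a condensed, hedged sketch rather than the literal test script; the rpc.py path, the /var/tmp/host.sock host socket, the NQNs, the 192.168.100.8 RDMA listener and the keyN/ckeyN names are taken directly from the trace, everything else is illustrative.

  # one DH-HMAC-CHAP verification pass, as exercised for key2/ffdhe4096 above (sketch)
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
  # target side: register the host with a bidirectional key pair
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: limit the initiator to the digest/dhgroup under test, then attach
  $RPC -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  $RPC -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # confirm the controller came up, inspect the negotiated auth state, then detach
  $RPC -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name'
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
  $RPC -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0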
00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.268 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.525 10:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:38.090 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.348 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.605 00:18:38.605 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.605 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.605 10:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.862 { 00:18:38.862 "cntlid": 31, 00:18:38.862 "qid": 0, 00:18:38.862 "state": "enabled", 00:18:38.862 "thread": "nvmf_tgt_poll_group_000", 00:18:38.862 "listen_address": { 00:18:38.862 "trtype": "RDMA", 00:18:38.862 "adrfam": "IPv4", 00:18:38.862 "traddr": "192.168.100.8", 00:18:38.862 "trsvcid": "4420" 00:18:38.862 }, 00:18:38.862 "peer_address": { 00:18:38.862 "trtype": "RDMA", 00:18:38.862 "adrfam": "IPv4", 00:18:38.862 "traddr": "192.168.100.8", 00:18:38.862 "trsvcid": "44095" 00:18:38.862 }, 00:18:38.862 "auth": { 00:18:38.862 "state": "completed", 00:18:38.862 "digest": "sha256", 00:18:38.862 "dhgroup": "ffdhe4096" 00:18:38.862 } 00:18:38.862 } 00:18:38.862 ]' 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.862 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.120 10:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:18:39.683 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.941 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.505 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.505 { 00:18:40.505 "cntlid": 33, 00:18:40.505 "qid": 0, 00:18:40.505 "state": "enabled", 00:18:40.505 "thread": "nvmf_tgt_poll_group_000", 00:18:40.505 "listen_address": { 00:18:40.505 "trtype": "RDMA", 00:18:40.505 "adrfam": "IPv4", 00:18:40.505 "traddr": "192.168.100.8", 00:18:40.505 "trsvcid": "4420" 00:18:40.505 }, 00:18:40.505 "peer_address": { 00:18:40.505 "trtype": "RDMA", 00:18:40.505 "adrfam": "IPv4", 00:18:40.505 "traddr": "192.168.100.8", 00:18:40.505 "trsvcid": "41925" 00:18:40.505 }, 00:18:40.505 "auth": { 00:18:40.505 "state": "completed", 00:18:40.505 "digest": "sha256", 00:18:40.505 "dhgroup": "ffdhe6144" 00:18:40.505 } 00:18:40.505 } 00:18:40.505 ]' 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.505 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.762 10:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.762 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.762 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.762 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.762 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.693 10:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.694 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.258 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.258 { 00:18:42.258 "cntlid": 35, 00:18:42.258 "qid": 0, 00:18:42.258 "state": "enabled", 00:18:42.258 "thread": "nvmf_tgt_poll_group_000", 00:18:42.258 "listen_address": { 00:18:42.258 "trtype": "RDMA", 00:18:42.258 "adrfam": "IPv4", 00:18:42.258 "traddr": "192.168.100.8", 00:18:42.258 "trsvcid": "4420" 00:18:42.258 }, 00:18:42.258 "peer_address": { 00:18:42.258 "trtype": "RDMA", 00:18:42.258 "adrfam": "IPv4", 00:18:42.258 "traddr": "192.168.100.8", 00:18:42.258 "trsvcid": "50626" 00:18:42.258 }, 00:18:42.258 "auth": { 00:18:42.258 "state": "completed", 00:18:42.258 "digest": "sha256", 00:18:42.258 "dhgroup": "ffdhe6144" 00:18:42.258 } 00:18:42.258 } 00:18:42.258 ]' 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.258 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.514 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.514 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.514 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.514 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.514 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.771 10:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:43.335 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.602 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:18:43.859 00:18:43.859 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.859 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.859 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.116 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.116 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.116 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.116 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.116 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.116 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.116 { 00:18:44.117 "cntlid": 37, 00:18:44.117 "qid": 0, 00:18:44.117 "state": "enabled", 00:18:44.117 "thread": "nvmf_tgt_poll_group_000", 00:18:44.117 "listen_address": { 00:18:44.117 "trtype": "RDMA", 00:18:44.117 "adrfam": "IPv4", 00:18:44.117 "traddr": "192.168.100.8", 00:18:44.117 "trsvcid": "4420" 00:18:44.117 }, 00:18:44.117 "peer_address": { 00:18:44.117 "trtype": "RDMA", 00:18:44.117 "adrfam": "IPv4", 00:18:44.117 "traddr": "192.168.100.8", 00:18:44.117 "trsvcid": "37606" 00:18:44.117 }, 00:18:44.117 "auth": { 00:18:44.117 "state": "completed", 00:18:44.117 "digest": "sha256", 00:18:44.117 "dhgroup": "ffdhe6144" 00:18:44.117 } 00:18:44.117 } 00:18:44.117 ]' 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.117 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.374 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:18:44.935 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
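After the bdev-level pass, the same credentials are cross-checked with the kernel initiator via nvme-cli, as the connect/disconnect lines in the trace show. A condensed sketch of that step follows; the DHHC-1 secrets are elided here because they are the test fixtures printed in the trace, and rpc_cmd is the harness's target-side RPC wrapper.

  # kernel-initiator cross-check of the key pair just verified through the bdev path (sketch)
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid 803833e2-2ada-e911-906e-0017a4403562 \
      --dhchap-secret 'DHHC-1:02:<host secret>' --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'
  # tear down the kernel connection and drop the host entry before the next key
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562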
00:18:45.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.192 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:45.192 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.192 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.192 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.192 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.192 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.192 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.449 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.705 00:18:45.705 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.705 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.705 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.963 { 00:18:45.963 "cntlid": 39, 00:18:45.963 "qid": 0, 00:18:45.963 "state": "enabled", 00:18:45.963 "thread": "nvmf_tgt_poll_group_000", 00:18:45.963 "listen_address": { 00:18:45.963 "trtype": "RDMA", 00:18:45.963 "adrfam": "IPv4", 00:18:45.963 "traddr": "192.168.100.8", 00:18:45.963 "trsvcid": "4420" 00:18:45.963 }, 00:18:45.963 "peer_address": { 00:18:45.963 "trtype": "RDMA", 00:18:45.963 "adrfam": "IPv4", 00:18:45.963 "traddr": "192.168.100.8", 00:18:45.963 "trsvcid": "35751" 00:18:45.963 }, 00:18:45.963 "auth": { 00:18:45.963 "state": "completed", 00:18:45.963 "digest": "sha256", 00:18:45.963 "dhgroup": "ffdhe6144" 00:18:45.963 } 00:18:45.963 } 00:18:45.963 ]' 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.963 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.219 10:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:18:46.782 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.782 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:46.782 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:46.783 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.040 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.603 00:18:47.603 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.603 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.603 10:40:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
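Each cycle then asserts the negotiated authentication parameters out of nvmf_subsystem_get_qpairs, exactly as the jq filters in the trace do. Condensed, and again assuming the harness's rpc_cmd wrapper, the checks amount to:

  # assert digest, DH group and auth state on the subsystem's first qpair (sketch)
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]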
00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.860 { 00:18:47.860 "cntlid": 41, 00:18:47.860 "qid": 0, 00:18:47.860 "state": "enabled", 00:18:47.860 "thread": "nvmf_tgt_poll_group_000", 00:18:47.860 "listen_address": { 00:18:47.860 "trtype": "RDMA", 00:18:47.860 "adrfam": "IPv4", 00:18:47.860 "traddr": "192.168.100.8", 00:18:47.860 "trsvcid": "4420" 00:18:47.860 }, 00:18:47.860 "peer_address": { 00:18:47.860 "trtype": "RDMA", 00:18:47.860 "adrfam": "IPv4", 00:18:47.860 "traddr": "192.168.100.8", 00:18:47.860 "trsvcid": "41667" 00:18:47.860 }, 00:18:47.860 "auth": { 00:18:47.860 "state": "completed", 00:18:47.860 "digest": "sha256", 00:18:47.860 "dhgroup": "ffdhe8192" 00:18:47.860 } 00:18:47.860 } 00:18:47.860 ]' 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.860 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.861 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.861 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.861 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.861 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.117 10:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:18:48.680 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
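Within this excerpt the outer loop (target/auth.sh lines 92-96 in the trace) walks sha256 across the ffdhe4096, ffdhe6144 and ffdhe8192 groups with keys 0-3; a hedged sketch of that structure, with hostrpc standing for the host-socket RPC wrapper seen above and the group list limited to what appears in this portion of the log, is:

  # one digest/dhgroup restriction per pass, then a full connect_authenticate cycle per key (sketch)
  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in 0 1 2 3; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done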
00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.937 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.501 00:18:49.501 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.501 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.501 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.758 10:40:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.758 { 00:18:49.758 "cntlid": 43, 00:18:49.758 "qid": 0, 00:18:49.758 "state": "enabled", 00:18:49.758 "thread": "nvmf_tgt_poll_group_000", 00:18:49.758 "listen_address": { 00:18:49.758 "trtype": "RDMA", 00:18:49.758 "adrfam": "IPv4", 00:18:49.758 "traddr": "192.168.100.8", 00:18:49.758 "trsvcid": "4420" 00:18:49.758 }, 00:18:49.758 "peer_address": { 00:18:49.758 "trtype": "RDMA", 00:18:49.758 "adrfam": "IPv4", 00:18:49.758 "traddr": "192.168.100.8", 00:18:49.758 "trsvcid": "36721" 00:18:49.758 }, 00:18:49.758 "auth": { 00:18:49.758 "state": "completed", 00:18:49.758 "digest": "sha256", 00:18:49.758 "dhgroup": "ffdhe8192" 00:18:49.758 } 00:18:49.758 } 00:18:49.758 ]' 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.758 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.015 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:18:50.579 10:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.836 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.400 00:18:51.400 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.400 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.400 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.657 10:40:58 
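Each pass in this log reduces to the same three host-facing steps: pin the host's DH-HMAC-CHAP digest/DH-group pair, allow the host NQN on the subsystem with the matching key, then attach a controller over RDMA (the attach only succeeds if authentication completes). A minimal sketch assembled from the commands logged above; hostrpc in the log expands to rpc.py against /var/tmp/host.sock, rpc_cmd is assumed to hit the target application's default RPC socket, and key2/ckey2 are key names registered earlier in the run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict the bdev layer to one digest/DH-group pair for this pass.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host with its key (and controller key for bidirectional auth).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach over RDMA; DH-HMAC-CHAP runs as part of the connect.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2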
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.657 { 00:18:51.657 "cntlid": 45, 00:18:51.657 "qid": 0, 00:18:51.657 "state": "enabled", 00:18:51.657 "thread": "nvmf_tgt_poll_group_000", 00:18:51.657 "listen_address": { 00:18:51.657 "trtype": "RDMA", 00:18:51.657 "adrfam": "IPv4", 00:18:51.657 "traddr": "192.168.100.8", 00:18:51.657 "trsvcid": "4420" 00:18:51.657 }, 00:18:51.657 "peer_address": { 00:18:51.657 "trtype": "RDMA", 00:18:51.657 "adrfam": "IPv4", 00:18:51.657 "traddr": "192.168.100.8", 00:18:51.657 "trsvcid": "46610" 00:18:51.657 }, 00:18:51.657 "auth": { 00:18:51.657 "state": "completed", 00:18:51.657 "digest": "sha256", 00:18:51.657 "dhgroup": "ffdhe8192" 00:18:51.657 } 00:18:51.657 } 00:18:51.657 ]' 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.657 10:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.657 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.657 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.657 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.657 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.657 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.914 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:18:52.477 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.735 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:52.735 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.735 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.735 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.735 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.735 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:52.735 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.735 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.298 00:18:53.298 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.298 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.298 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.555 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.555 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.555 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.555 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.555 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.555 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.555 { 00:18:53.555 "cntlid": 47, 00:18:53.555 "qid": 0, 00:18:53.555 "state": "enabled", 00:18:53.555 "thread": "nvmf_tgt_poll_group_000", 00:18:53.555 "listen_address": { 00:18:53.555 "trtype": "RDMA", 00:18:53.555 "adrfam": "IPv4", 00:18:53.555 "traddr": "192.168.100.8", 00:18:53.556 
"trsvcid": "4420" 00:18:53.556 }, 00:18:53.556 "peer_address": { 00:18:53.556 "trtype": "RDMA", 00:18:53.556 "adrfam": "IPv4", 00:18:53.556 "traddr": "192.168.100.8", 00:18:53.556 "trsvcid": "53436" 00:18:53.556 }, 00:18:53.556 "auth": { 00:18:53.556 "state": "completed", 00:18:53.556 "digest": "sha256", 00:18:53.556 "dhgroup": "ffdhe8192" 00:18:53.556 } 00:18:53.556 } 00:18:53.556 ]' 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.556 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.813 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:18:54.377 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.635 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 null 0 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.635 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.893 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.893 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.893 00:18:54.893 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.893 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.893 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.151 { 00:18:55.151 "cntlid": 49, 00:18:55.151 "qid": 0, 00:18:55.151 "state": "enabled", 00:18:55.151 "thread": "nvmf_tgt_poll_group_000", 00:18:55.151 "listen_address": { 00:18:55.151 "trtype": "RDMA", 00:18:55.151 "adrfam": "IPv4", 00:18:55.151 "traddr": "192.168.100.8", 00:18:55.151 "trsvcid": "4420" 00:18:55.151 }, 00:18:55.151 "peer_address": { 00:18:55.151 "trtype": "RDMA", 00:18:55.151 "adrfam": 
"IPv4", 00:18:55.151 "traddr": "192.168.100.8", 00:18:55.151 "trsvcid": "44976" 00:18:55.151 }, 00:18:55.151 "auth": { 00:18:55.151 "state": "completed", 00:18:55.151 "digest": "sha384", 00:18:55.151 "dhgroup": "null" 00:18:55.151 } 00:18:55.151 } 00:18:55.151 ]' 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:55.151 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.408 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.408 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.408 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.408 10:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.341 10:41:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.341 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.342 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.342 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.342 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.342 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.342 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.600 00:18:56.600 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.600 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.600 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.858 { 00:18:56.858 "cntlid": 51, 00:18:56.858 "qid": 0, 00:18:56.858 "state": "enabled", 00:18:56.858 "thread": "nvmf_tgt_poll_group_000", 00:18:56.858 "listen_address": { 00:18:56.858 "trtype": "RDMA", 00:18:56.858 "adrfam": "IPv4", 00:18:56.858 "traddr": "192.168.100.8", 00:18:56.858 "trsvcid": "4420" 00:18:56.858 }, 00:18:56.858 "peer_address": { 00:18:56.858 "trtype": "RDMA", 00:18:56.858 "adrfam": "IPv4", 00:18:56.858 "traddr": "192.168.100.8", 00:18:56.858 "trsvcid": "56549" 00:18:56.858 }, 00:18:56.858 "auth": { 00:18:56.858 "state": "completed", 00:18:56.858 "digest": "sha384", 00:18:56.858 "dhgroup": "null" 00:18:56.858 } 00:18:56.858 } 00:18:56.858 ]' 00:18:56.858 10:41:04 
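The qpairs dump above is how each pass is verified: the target's nvmf_subsystem_get_qpairs output is filtered with jq and compared against the digest, DH group, and auth state that the pass configured (auth.sh@45-@48). A condensed sketch of that check, assuming the same rpc.py path as the rest of the log; this pass expects sha384 with the null DH group:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # Fail the pass unless the negotiated parameters match what was configured.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]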
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.858 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.116 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:57.749 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.008 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.268 00:18:58.268 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.268 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.268 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.527 { 00:18:58.527 "cntlid": 53, 00:18:58.527 "qid": 0, 00:18:58.527 "state": "enabled", 00:18:58.527 "thread": "nvmf_tgt_poll_group_000", 00:18:58.527 "listen_address": { 00:18:58.527 "trtype": "RDMA", 00:18:58.527 "adrfam": "IPv4", 00:18:58.527 "traddr": "192.168.100.8", 00:18:58.527 "trsvcid": "4420" 00:18:58.527 }, 00:18:58.527 "peer_address": { 00:18:58.527 "trtype": "RDMA", 00:18:58.527 "adrfam": "IPv4", 00:18:58.527 "traddr": "192.168.100.8", 00:18:58.527 "trsvcid": "52472" 00:18:58.527 }, 00:18:58.527 "auth": { 00:18:58.527 "state": "completed", 00:18:58.527 "digest": "sha384", 00:18:58.527 "dhgroup": "null" 00:18:58.527 } 00:18:58.527 } 00:18:58.527 ]' 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.527 10:41:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.785 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:18:59.377 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.636 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:59.636 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.636 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.636 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.636 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.636 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:59.636 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
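The same subsystem is then exercised from the kernel initiator with nvme-cli, passing the generated DH-HMAC-CHAP secrets directly; the invocation just logged has this shape (the DHHC-1 strings are shown as placeholders here, the real values being the base64 keys visible in the log):

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid 803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller key>:'

    # Once the controller shows up as connected, tearing it down ends the pass.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0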
xtrace_disable 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.636 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.895 00:18:59.895 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.895 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.895 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.155 { 00:19:00.155 "cntlid": 55, 00:19:00.155 "qid": 0, 00:19:00.155 "state": "enabled", 00:19:00.155 "thread": "nvmf_tgt_poll_group_000", 00:19:00.155 "listen_address": { 00:19:00.155 "trtype": "RDMA", 00:19:00.155 "adrfam": "IPv4", 00:19:00.155 "traddr": "192.168.100.8", 00:19:00.155 "trsvcid": "4420" 00:19:00.155 }, 00:19:00.155 "peer_address": { 00:19:00.155 "trtype": "RDMA", 00:19:00.155 "adrfam": "IPv4", 00:19:00.155 "traddr": "192.168.100.8", 00:19:00.155 "trsvcid": "37258" 00:19:00.155 }, 00:19:00.155 "auth": { 00:19:00.155 "state": "completed", 00:19:00.155 "digest": "sha384", 00:19:00.155 "dhgroup": "null" 00:19:00.155 } 00:19:00.155 } 00:19:00.155 ]' 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.155 10:41:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.155 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.414 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:00.982 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.242 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:01.242 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.243 10:41:08 
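The auth.sh markers repeated through this log (@91 for digest, @92 for dhgroup, @93 for keyid, @94 set options, @96 connect_authenticate) imply a nested sweep over every configured digest, DH group, and key index. A hedged reconstruction of that driver loop; the digests, dhgroups, and keys arrays and the connect_authenticate helper are defined earlier in target/auth.sh and are not part of this excerpt:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Pin the host to a single digest/DH-group pair ...
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # ... then authenticate with this key over RDMA and verify the qpair state.
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done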
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.243 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.502 00:19:01.502 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.502 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.502 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.761 { 00:19:01.761 "cntlid": 57, 00:19:01.761 "qid": 0, 00:19:01.761 "state": "enabled", 00:19:01.761 "thread": "nvmf_tgt_poll_group_000", 00:19:01.761 "listen_address": { 00:19:01.761 "trtype": "RDMA", 00:19:01.761 "adrfam": "IPv4", 00:19:01.761 "traddr": "192.168.100.8", 00:19:01.761 "trsvcid": "4420" 00:19:01.761 }, 00:19:01.761 "peer_address": { 00:19:01.761 "trtype": "RDMA", 00:19:01.761 "adrfam": "IPv4", 00:19:01.761 "traddr": "192.168.100.8", 00:19:01.761 "trsvcid": "34025" 00:19:01.761 }, 00:19:01.761 "auth": { 00:19:01.761 "state": "completed", 00:19:01.761 "digest": "sha384", 00:19:01.761 "dhgroup": "ffdhe2048" 00:19:01.761 } 00:19:01.761 } 00:19:01.761 ]' 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:01.761 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.020 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.020 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.020 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.020 10:41:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:02.590 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.848 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:02.848 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.848 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.848 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.848 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.848 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:02.848 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.107 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.366 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.366 { 00:19:03.366 "cntlid": 59, 00:19:03.366 "qid": 0, 00:19:03.366 "state": "enabled", 00:19:03.366 "thread": "nvmf_tgt_poll_group_000", 00:19:03.366 "listen_address": { 00:19:03.366 "trtype": "RDMA", 00:19:03.366 "adrfam": "IPv4", 00:19:03.366 "traddr": "192.168.100.8", 00:19:03.366 "trsvcid": "4420" 00:19:03.366 }, 00:19:03.366 "peer_address": { 00:19:03.366 "trtype": "RDMA", 00:19:03.366 "adrfam": "IPv4", 00:19:03.366 "traddr": "192.168.100.8", 00:19:03.366 "trsvcid": "51393" 00:19:03.366 }, 00:19:03.366 "auth": { 00:19:03.366 "state": "completed", 00:19:03.366 "digest": "sha384", 00:19:03.366 "dhgroup": "ffdhe2048" 00:19:03.366 } 00:19:03.366 } 00:19:03.366 ]' 00:19:03.366 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.625 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.625 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.625 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.625 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.625 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.625 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.625 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.884 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.450 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.709 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.967 00:19:04.967 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.967 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.967 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.225 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.225 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.225 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.225 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.225 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.226 { 00:19:05.226 "cntlid": 61, 00:19:05.226 "qid": 0, 00:19:05.226 "state": "enabled", 00:19:05.226 "thread": "nvmf_tgt_poll_group_000", 00:19:05.226 "listen_address": { 00:19:05.226 "trtype": "RDMA", 00:19:05.226 "adrfam": "IPv4", 00:19:05.226 "traddr": "192.168.100.8", 00:19:05.226 "trsvcid": "4420" 00:19:05.226 }, 00:19:05.226 "peer_address": { 00:19:05.226 "trtype": "RDMA", 00:19:05.226 "adrfam": "IPv4", 00:19:05.226 "traddr": "192.168.100.8", 00:19:05.226 "trsvcid": "54325" 00:19:05.226 }, 00:19:05.226 "auth": { 00:19:05.226 "state": "completed", 00:19:05.226 "digest": "sha384", 00:19:05.226 "dhgroup": "ffdhe2048" 00:19:05.226 } 00:19:05.226 } 00:19:05.226 ]' 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.226 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.484 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret 
DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:06.058 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.323 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.581 00:19:06.581 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.581 10:41:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.581 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.839 { 00:19:06.839 "cntlid": 63, 00:19:06.839 "qid": 0, 00:19:06.839 "state": "enabled", 00:19:06.839 "thread": "nvmf_tgt_poll_group_000", 00:19:06.839 "listen_address": { 00:19:06.839 "trtype": "RDMA", 00:19:06.839 "adrfam": "IPv4", 00:19:06.839 "traddr": "192.168.100.8", 00:19:06.839 "trsvcid": "4420" 00:19:06.839 }, 00:19:06.839 "peer_address": { 00:19:06.839 "trtype": "RDMA", 00:19:06.839 "adrfam": "IPv4", 00:19:06.839 "traddr": "192.168.100.8", 00:19:06.839 "trsvcid": "46547" 00:19:06.839 }, 00:19:06.839 "auth": { 00:19:06.839 "state": "completed", 00:19:06.839 "digest": "sha384", 00:19:06.839 "dhgroup": "ffdhe2048" 00:19:06.839 } 00:19:06.839 } 00:19:06.839 ]' 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.839 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.097 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.097 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.097 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.097 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:07.665 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.923 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.182 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.441 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.441 { 00:19:08.441 "cntlid": 65, 00:19:08.441 "qid": 0, 00:19:08.441 "state": "enabled", 00:19:08.441 "thread": "nvmf_tgt_poll_group_000", 00:19:08.441 "listen_address": { 00:19:08.441 "trtype": "RDMA", 00:19:08.441 "adrfam": "IPv4", 00:19:08.441 "traddr": "192.168.100.8", 00:19:08.441 "trsvcid": "4420" 00:19:08.441 }, 00:19:08.441 "peer_address": { 00:19:08.441 "trtype": "RDMA", 00:19:08.441 "adrfam": "IPv4", 00:19:08.441 "traddr": "192.168.100.8", 00:19:08.441 "trsvcid": "41205" 00:19:08.441 }, 00:19:08.441 "auth": { 00:19:08.441 "state": "completed", 00:19:08.441 "digest": "sha384", 00:19:08.441 "dhgroup": "ffdhe3072" 00:19:08.441 } 00:19:08.441 } 00:19:08.441 ]' 00:19:08.441 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.700 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.700 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.700 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.700 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.700 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.700 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.700 10:41:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.958 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.525 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.783 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.041 00:19:10.041 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.041 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.041 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.299 { 00:19:10.299 "cntlid": 67, 00:19:10.299 "qid": 0, 00:19:10.299 "state": "enabled", 00:19:10.299 "thread": "nvmf_tgt_poll_group_000", 00:19:10.299 "listen_address": { 00:19:10.299 "trtype": "RDMA", 00:19:10.299 "adrfam": "IPv4", 00:19:10.299 "traddr": "192.168.100.8", 00:19:10.299 "trsvcid": "4420" 00:19:10.299 }, 00:19:10.299 "peer_address": { 00:19:10.299 "trtype": "RDMA", 00:19:10.299 "adrfam": "IPv4", 00:19:10.299 "traddr": "192.168.100.8", 00:19:10.299 "trsvcid": "57766" 00:19:10.299 }, 00:19:10.299 "auth": { 00:19:10.299 "state": "completed", 00:19:10.299 "digest": "sha384", 00:19:10.299 "dhgroup": "ffdhe3072" 00:19:10.299 } 00:19:10.299 } 00:19:10.299 ]' 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.299 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.558 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:11.124 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.382 10:41:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.382 10:41:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.640 00:19:11.640 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.640 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.640 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.898 
10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.898 { 00:19:11.898 "cntlid": 69, 00:19:11.898 "qid": 0, 00:19:11.898 "state": "enabled", 00:19:11.898 "thread": "nvmf_tgt_poll_group_000", 00:19:11.898 "listen_address": { 00:19:11.898 "trtype": "RDMA", 00:19:11.898 "adrfam": "IPv4", 00:19:11.898 "traddr": "192.168.100.8", 00:19:11.898 "trsvcid": "4420" 00:19:11.898 }, 00:19:11.898 "peer_address": { 00:19:11.898 "trtype": "RDMA", 00:19:11.898 "adrfam": "IPv4", 00:19:11.898 "traddr": "192.168.100.8", 00:19:11.898 "trsvcid": "37500" 00:19:11.898 }, 00:19:11.898 "auth": { 00:19:11.898 "state": "completed", 00:19:11.898 "digest": "sha384", 00:19:11.898 "dhgroup": "ffdhe3072" 00:19:11.898 } 00:19:11.898 } 00:19:11.898 ]' 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.898 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.155 10:41:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:12.721 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.978 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:12.978 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.978 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.978 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.978 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.978 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.978 10:41:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.236 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.236 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.494 { 00:19:13.494 "cntlid": 71, 00:19:13.494 "qid": 0, 00:19:13.494 "state": "enabled", 00:19:13.494 "thread": "nvmf_tgt_poll_group_000", 00:19:13.494 
"listen_address": { 00:19:13.494 "trtype": "RDMA", 00:19:13.494 "adrfam": "IPv4", 00:19:13.494 "traddr": "192.168.100.8", 00:19:13.494 "trsvcid": "4420" 00:19:13.494 }, 00:19:13.494 "peer_address": { 00:19:13.494 "trtype": "RDMA", 00:19:13.494 "adrfam": "IPv4", 00:19:13.494 "traddr": "192.168.100.8", 00:19:13.494 "trsvcid": "55798" 00:19:13.494 }, 00:19:13.494 "auth": { 00:19:13.494 "state": "completed", 00:19:13.494 "digest": "sha384", 00:19:13.494 "dhgroup": "ffdhe3072" 00:19:13.494 } 00:19:13.494 } 00:19:13.494 ]' 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.494 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.751 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.751 10:41:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.751 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.751 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.751 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.751 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:14.686 10:41:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe4096 0 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.686 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.944 00:19:14.945 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.945 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.945 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.203 { 00:19:15.203 "cntlid": 73, 00:19:15.203 "qid": 0, 00:19:15.203 "state": "enabled", 00:19:15.203 "thread": "nvmf_tgt_poll_group_000", 00:19:15.203 "listen_address": { 00:19:15.203 "trtype": "RDMA", 00:19:15.203 "adrfam": "IPv4", 00:19:15.203 "traddr": "192.168.100.8", 00:19:15.203 "trsvcid": "4420" 00:19:15.203 }, 00:19:15.203 "peer_address": { 00:19:15.203 "trtype": "RDMA", 00:19:15.203 
"adrfam": "IPv4", 00:19:15.203 "traddr": "192.168.100.8", 00:19:15.203 "trsvcid": "45402" 00:19:15.203 }, 00:19:15.203 "auth": { 00:19:15.203 "state": "completed", 00:19:15.203 "digest": "sha384", 00:19:15.203 "dhgroup": "ffdhe4096" 00:19:15.203 } 00:19:15.203 } 00:19:15.203 ]' 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.203 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.461 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.462 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.462 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.462 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.462 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.462 10:41:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.397 10:41:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.655 00:19:16.655 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.655 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.655 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.913 { 00:19:16.913 "cntlid": 75, 00:19:16.913 "qid": 0, 00:19:16.913 "state": "enabled", 00:19:16.913 "thread": "nvmf_tgt_poll_group_000", 00:19:16.913 "listen_address": { 00:19:16.913 "trtype": "RDMA", 00:19:16.913 "adrfam": "IPv4", 00:19:16.913 "traddr": "192.168.100.8", 00:19:16.913 "trsvcid": "4420" 00:19:16.913 }, 00:19:16.913 "peer_address": { 00:19:16.913 "trtype": "RDMA", 00:19:16.913 "adrfam": "IPv4", 00:19:16.913 "traddr": "192.168.100.8", 00:19:16.913 "trsvcid": "50983" 00:19:16.913 }, 00:19:16.913 "auth": { 00:19:16.913 "state": "completed", 00:19:16.913 "digest": "sha384", 00:19:16.913 "dhgroup": "ffdhe4096" 00:19:16.913 } 00:19:16.913 } 
00:19:16.913 ]' 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.913 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.170 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.170 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.170 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.170 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.170 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.170 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.104 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.362 00:19:18.362 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.362 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.362 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.624 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.624 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.624 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.624 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.624 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.624 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.624 { 00:19:18.624 "cntlid": 77, 00:19:18.624 "qid": 0, 00:19:18.624 "state": "enabled", 00:19:18.624 "thread": "nvmf_tgt_poll_group_000", 00:19:18.624 "listen_address": { 00:19:18.624 "trtype": "RDMA", 00:19:18.624 "adrfam": "IPv4", 00:19:18.624 "traddr": "192.168.100.8", 00:19:18.624 "trsvcid": "4420" 00:19:18.624 }, 00:19:18.624 "peer_address": { 00:19:18.624 "trtype": "RDMA", 00:19:18.624 "adrfam": "IPv4", 00:19:18.624 "traddr": "192.168.100.8", 00:19:18.624 "trsvcid": "42758" 00:19:18.624 }, 00:19:18.624 "auth": { 00:19:18.624 "state": "completed", 00:19:18.624 "digest": "sha384", 00:19:18.624 "dhgroup": "ffdhe4096" 00:19:18.624 } 00:19:18.625 } 00:19:18.625 ]' 00:19:18.625 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.625 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.625 10:41:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.625 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.625 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.908 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.908 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.908 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.908 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:19.488 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.746 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:19.746 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.746 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.746 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.005 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.263 { 00:19:20.263 "cntlid": 79, 00:19:20.263 "qid": 0, 00:19:20.263 "state": "enabled", 00:19:20.263 "thread": "nvmf_tgt_poll_group_000", 00:19:20.263 "listen_address": { 00:19:20.263 "trtype": "RDMA", 00:19:20.263 "adrfam": "IPv4", 00:19:20.263 "traddr": "192.168.100.8", 00:19:20.263 "trsvcid": "4420" 00:19:20.263 }, 00:19:20.263 "peer_address": { 00:19:20.263 "trtype": "RDMA", 00:19:20.263 "adrfam": "IPv4", 00:19:20.263 "traddr": "192.168.100.8", 00:19:20.263 "trsvcid": "53462" 00:19:20.263 }, 00:19:20.263 "auth": { 00:19:20.263 "state": "completed", 00:19:20.263 "digest": "sha384", 00:19:20.263 "dhgroup": "ffdhe4096" 00:19:20.263 } 00:19:20.263 } 00:19:20.263 ]' 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.263 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.521 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.521 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:19:20.521 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.521 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.521 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.521 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.455 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.456 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.456 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.456 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.456 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.456 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.456 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.456 10:41:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.021 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.021 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.021 { 00:19:22.021 "cntlid": 81, 00:19:22.021 "qid": 0, 00:19:22.021 "state": "enabled", 00:19:22.021 "thread": "nvmf_tgt_poll_group_000", 00:19:22.021 "listen_address": { 00:19:22.021 "trtype": "RDMA", 00:19:22.021 "adrfam": "IPv4", 00:19:22.021 "traddr": "192.168.100.8", 00:19:22.021 "trsvcid": "4420" 00:19:22.022 }, 00:19:22.022 "peer_address": { 00:19:22.022 "trtype": "RDMA", 00:19:22.022 "adrfam": "IPv4", 00:19:22.022 "traddr": "192.168.100.8", 00:19:22.022 "trsvcid": "48282" 00:19:22.022 }, 00:19:22.022 "auth": { 00:19:22.022 "state": "completed", 00:19:22.022 "digest": "sha384", 00:19:22.022 "dhgroup": "ffdhe6144" 00:19:22.022 } 00:19:22.022 } 00:19:22.022 ]' 00:19:22.022 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.022 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.022 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.279 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.279 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.279 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.279 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.279 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.279 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.213 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.778 00:19:23.778 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.778 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.778 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.778 { 00:19:23.778 "cntlid": 83, 00:19:23.778 "qid": 0, 00:19:23.778 "state": "enabled", 00:19:23.778 "thread": "nvmf_tgt_poll_group_000", 00:19:23.778 "listen_address": { 00:19:23.778 "trtype": "RDMA", 00:19:23.778 "adrfam": "IPv4", 00:19:23.778 "traddr": "192.168.100.8", 00:19:23.778 "trsvcid": "4420" 00:19:23.778 }, 00:19:23.778 "peer_address": { 00:19:23.778 "trtype": "RDMA", 00:19:23.778 "adrfam": "IPv4", 00:19:23.778 "traddr": "192.168.100.8", 00:19:23.778 "trsvcid": "50567" 00:19:23.778 }, 00:19:23.778 "auth": { 00:19:23.778 "state": "completed", 00:19:23.778 "digest": "sha384", 00:19:23.778 "dhgroup": "ffdhe6144" 00:19:23.778 } 00:19:23.778 } 00:19:23.778 ]' 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.778 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.037 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.037 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.037 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.037 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.037 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:24.037 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:24.602 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.860 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:24.860 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.860 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.860 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.860 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.860 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:24.860 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.118 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.118 10:41:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.376 00:19:25.376 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.376 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.376 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.634 { 00:19:25.634 "cntlid": 85, 00:19:25.634 "qid": 0, 00:19:25.634 "state": "enabled", 00:19:25.634 "thread": "nvmf_tgt_poll_group_000", 00:19:25.634 "listen_address": { 00:19:25.634 "trtype": "RDMA", 00:19:25.634 "adrfam": "IPv4", 00:19:25.634 "traddr": "192.168.100.8", 00:19:25.634 "trsvcid": "4420" 00:19:25.634 }, 00:19:25.634 "peer_address": { 00:19:25.634 "trtype": "RDMA", 00:19:25.634 "adrfam": "IPv4", 00:19:25.634 "traddr": "192.168.100.8", 00:19:25.634 "trsvcid": "40609" 00:19:25.634 }, 00:19:25.634 "auth": { 00:19:25.634 "state": "completed", 00:19:25.634 "digest": "sha384", 00:19:25.634 "dhgroup": "ffdhe6144" 00:19:25.634 } 00:19:25.634 } 00:19:25.634 ]' 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.634 10:41:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.892 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:26.456 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.456 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:26.456 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.456 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.714 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.714 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.714 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.714 10:41:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.714 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.972 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.231 { 00:19:27.231 "cntlid": 87, 00:19:27.231 "qid": 0, 00:19:27.231 "state": "enabled", 00:19:27.231 "thread": "nvmf_tgt_poll_group_000", 00:19:27.231 "listen_address": { 00:19:27.231 "trtype": "RDMA", 00:19:27.231 "adrfam": "IPv4", 00:19:27.231 "traddr": "192.168.100.8", 00:19:27.231 "trsvcid": "4420" 00:19:27.231 }, 00:19:27.231 "peer_address": { 00:19:27.231 "trtype": "RDMA", 00:19:27.231 "adrfam": "IPv4", 00:19:27.231 "traddr": "192.168.100.8", 00:19:27.231 "trsvcid": "51685" 00:19:27.231 }, 00:19:27.231 "auth": { 00:19:27.231 "state": "completed", 00:19:27.231 "digest": "sha384", 00:19:27.231 "dhgroup": "ffdhe6144" 00:19:27.231 } 00:19:27.231 } 00:19:27.231 ]' 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.231 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.489 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.489 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.489 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.489 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.489 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.490 10:41:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.424 10:41:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.990 00:19:28.990 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:19:28.990 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.990 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.248 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.248 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.249 { 00:19:29.249 "cntlid": 89, 00:19:29.249 "qid": 0, 00:19:29.249 "state": "enabled", 00:19:29.249 "thread": "nvmf_tgt_poll_group_000", 00:19:29.249 "listen_address": { 00:19:29.249 "trtype": "RDMA", 00:19:29.249 "adrfam": "IPv4", 00:19:29.249 "traddr": "192.168.100.8", 00:19:29.249 "trsvcid": "4420" 00:19:29.249 }, 00:19:29.249 "peer_address": { 00:19:29.249 "trtype": "RDMA", 00:19:29.249 "adrfam": "IPv4", 00:19:29.249 "traddr": "192.168.100.8", 00:19:29.249 "trsvcid": "48650" 00:19:29.249 }, 00:19:29.249 "auth": { 00:19:29.249 "state": "completed", 00:19:29.249 "digest": "sha384", 00:19:29.249 "dhgroup": "ffdhe8192" 00:19:29.249 } 00:19:29.249 } 00:19:29.249 ]' 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.249 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.507 10:41:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:30.073 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
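[reference note] The trace above repeats the same connect/authenticate cycle for each digest, dhgroup and key index. A minimal sketch of one iteration, using only the RPCs and flags exercised in this run (NQNs, host ID, address and key names are taken from this log and are placeholders for other setups; target-side calls are assumed to go to the target's default RPC socket, host-side calls to /var/tmp/host.sock as above):

    # one connect_authenticate-style iteration (sketch, not the test script itself)
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostid=803833e2-2ada-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

    # host-side bdev layer: restrict DH-CHAP to one digest/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # target side: allow the host with a DH-CHAP key (controller key optional)
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach and verify the qpair finished authentication
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'   # expect "completed"

    # kernel initiator path checked the same way (secret value elided here)
    # nvme connect -t rdma -a 192.168.100.8 -n $subnqn -i 1 -q $hostnqn --hostid $hostid --dhchap-secret "DHHC-1:..."

    # tear down before the next digest/dhgroup/key combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn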
00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.331 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.897 00:19:30.897 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.897 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.897 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.155 { 00:19:31.155 "cntlid": 91, 00:19:31.155 "qid": 0, 00:19:31.155 "state": "enabled", 00:19:31.155 "thread": "nvmf_tgt_poll_group_000", 00:19:31.155 "listen_address": { 00:19:31.155 "trtype": "RDMA", 00:19:31.155 "adrfam": "IPv4", 00:19:31.155 "traddr": "192.168.100.8", 00:19:31.155 "trsvcid": "4420" 00:19:31.155 }, 00:19:31.155 "peer_address": { 00:19:31.155 "trtype": "RDMA", 00:19:31.155 "adrfam": "IPv4", 00:19:31.155 "traddr": "192.168.100.8", 00:19:31.155 "trsvcid": "49282" 00:19:31.155 }, 00:19:31.155 "auth": { 00:19:31.155 "state": "completed", 00:19:31.155 "digest": "sha384", 00:19:31.155 "dhgroup": "ffdhe8192" 00:19:31.155 } 00:19:31.155 } 00:19:31.155 ]' 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.155 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.413 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:31.980 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.980 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:31.980 10:41:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.980 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.980 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.980 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.980 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.980 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.238 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.805 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.805 10:41:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.805 { 00:19:32.805 "cntlid": 93, 00:19:32.805 "qid": 0, 00:19:32.805 "state": "enabled", 00:19:32.805 "thread": "nvmf_tgt_poll_group_000", 00:19:32.805 "listen_address": { 00:19:32.805 "trtype": "RDMA", 00:19:32.805 "adrfam": "IPv4", 00:19:32.805 "traddr": "192.168.100.8", 00:19:32.805 "trsvcid": "4420" 00:19:32.805 }, 00:19:32.805 "peer_address": { 00:19:32.805 "trtype": "RDMA", 00:19:32.805 "adrfam": "IPv4", 00:19:32.805 "traddr": "192.168.100.8", 00:19:32.805 "trsvcid": "47588" 00:19:32.805 }, 00:19:32.805 "auth": { 00:19:32.805 "state": "completed", 00:19:32.805 "digest": "sha384", 00:19:32.805 "dhgroup": "ffdhe8192" 00:19:32.805 } 00:19:32.805 } 00:19:32.805 ]' 00:19:32.805 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.063 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.063 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.063 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.063 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.063 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.063 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.063 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.321 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.887 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.144 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:34.144 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.144 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.145 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.711 00:19:34.711 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.711 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.711 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.711 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.711 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.711 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.711 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.711 
10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.711 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.711 { 00:19:34.711 "cntlid": 95, 00:19:34.711 "qid": 0, 00:19:34.711 "state": "enabled", 00:19:34.711 "thread": "nvmf_tgt_poll_group_000", 00:19:34.711 "listen_address": { 00:19:34.711 "trtype": "RDMA", 00:19:34.711 "adrfam": "IPv4", 00:19:34.711 "traddr": "192.168.100.8", 00:19:34.711 "trsvcid": "4420" 00:19:34.711 }, 00:19:34.711 "peer_address": { 00:19:34.711 "trtype": "RDMA", 00:19:34.711 "adrfam": "IPv4", 00:19:34.711 "traddr": "192.168.100.8", 00:19:34.711 "trsvcid": "39104" 00:19:34.711 }, 00:19:34.711 "auth": { 00:19:34.711 "state": "completed", 00:19:34.711 "digest": "sha384", 00:19:34.711 "dhgroup": "ffdhe8192" 00:19:34.711 } 00:19:34.711 } 00:19:34.711 ]' 00:19:34.711 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.970 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.970 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.970 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.970 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.970 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.970 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.970 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.227 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.792 10:41:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:35.792 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.053 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.314 00:19:36.314 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.314 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.314 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.572 10:41:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.572 { 00:19:36.572 "cntlid": 97, 00:19:36.572 "qid": 0, 00:19:36.572 "state": "enabled", 00:19:36.572 "thread": "nvmf_tgt_poll_group_000", 00:19:36.572 "listen_address": { 00:19:36.572 "trtype": "RDMA", 00:19:36.572 "adrfam": "IPv4", 00:19:36.572 "traddr": "192.168.100.8", 00:19:36.572 "trsvcid": "4420" 00:19:36.572 }, 00:19:36.572 "peer_address": { 00:19:36.572 "trtype": "RDMA", 00:19:36.572 "adrfam": "IPv4", 00:19:36.572 "traddr": "192.168.100.8", 00:19:36.572 "trsvcid": "60669" 00:19:36.572 }, 00:19:36.572 "auth": { 00:19:36.572 "state": "completed", 00:19:36.572 "digest": "sha512", 00:19:36.572 "dhgroup": "null" 00:19:36.572 } 00:19:36.572 } 00:19:36.572 ]' 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.572 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.830 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:37.422 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.422 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:37.422 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.422 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.423 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.423 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.423 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:37.423 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.694 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.951 00:19:37.951 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.951 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.951 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.211 { 00:19:38.211 "cntlid": 99, 00:19:38.211 "qid": 0, 00:19:38.211 "state": "enabled", 00:19:38.211 "thread": "nvmf_tgt_poll_group_000", 00:19:38.211 
"listen_address": { 00:19:38.211 "trtype": "RDMA", 00:19:38.211 "adrfam": "IPv4", 00:19:38.211 "traddr": "192.168.100.8", 00:19:38.211 "trsvcid": "4420" 00:19:38.211 }, 00:19:38.211 "peer_address": { 00:19:38.211 "trtype": "RDMA", 00:19:38.211 "adrfam": "IPv4", 00:19:38.211 "traddr": "192.168.100.8", 00:19:38.211 "trsvcid": "39198" 00:19:38.211 }, 00:19:38.211 "auth": { 00:19:38.211 "state": "completed", 00:19:38.211 "digest": "sha512", 00:19:38.211 "dhgroup": "null" 00:19:38.211 } 00:19:38.211 } 00:19:38.211 ]' 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:38.211 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.212 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.212 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.212 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.470 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:39.038 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:39.297 10:41:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.297 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.556 00:19:39.556 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.556 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.556 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.815 { 00:19:39.815 "cntlid": 101, 00:19:39.815 "qid": 0, 00:19:39.815 "state": "enabled", 00:19:39.815 "thread": "nvmf_tgt_poll_group_000", 00:19:39.815 "listen_address": { 00:19:39.815 "trtype": "RDMA", 00:19:39.815 "adrfam": "IPv4", 00:19:39.815 "traddr": "192.168.100.8", 00:19:39.815 "trsvcid": "4420" 00:19:39.815 }, 00:19:39.815 "peer_address": { 00:19:39.815 "trtype": "RDMA", 00:19:39.815 "adrfam": "IPv4", 00:19:39.815 "traddr": "192.168.100.8", 00:19:39.815 
"trsvcid": "53344" 00:19:39.815 }, 00:19:39.815 "auth": { 00:19:39.815 "state": "completed", 00:19:39.815 "digest": "sha512", 00:19:39.815 "dhgroup": "null" 00:19:39.815 } 00:19:39.815 } 00:19:39.815 ]' 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.815 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.073 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:40.640 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.640 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:40.640 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.640 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.640 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.640 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.640 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:40.640 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:40.899 10:41:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.899 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.158 00:19:41.158 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.158 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.158 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.416 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.416 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.416 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.416 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.417 { 00:19:41.417 "cntlid": 103, 00:19:41.417 "qid": 0, 00:19:41.417 "state": "enabled", 00:19:41.417 "thread": "nvmf_tgt_poll_group_000", 00:19:41.417 "listen_address": { 00:19:41.417 "trtype": "RDMA", 00:19:41.417 "adrfam": "IPv4", 00:19:41.417 "traddr": "192.168.100.8", 00:19:41.417 "trsvcid": "4420" 00:19:41.417 }, 00:19:41.417 "peer_address": { 00:19:41.417 "trtype": "RDMA", 00:19:41.417 "adrfam": "IPv4", 00:19:41.417 "traddr": "192.168.100.8", 00:19:41.417 "trsvcid": "53520" 00:19:41.417 }, 00:19:41.417 "auth": { 00:19:41.417 "state": "completed", 00:19:41.417 "digest": "sha512", 00:19:41.417 "dhgroup": "null" 00:19:41.417 } 00:19:41.417 } 00:19:41.417 ]' 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.417 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.675 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:42.242 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.501 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.759 00:19:42.759 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.759 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.759 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.017 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.017 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.017 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.017 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.017 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.017 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.017 { 00:19:43.017 "cntlid": 105, 00:19:43.017 "qid": 0, 00:19:43.017 "state": "enabled", 00:19:43.017 "thread": "nvmf_tgt_poll_group_000", 00:19:43.017 "listen_address": { 00:19:43.017 "trtype": "RDMA", 00:19:43.017 "adrfam": "IPv4", 00:19:43.017 "traddr": "192.168.100.8", 00:19:43.017 "trsvcid": "4420" 00:19:43.017 }, 00:19:43.017 "peer_address": { 00:19:43.017 "trtype": "RDMA", 00:19:43.017 "adrfam": "IPv4", 00:19:43.017 "traddr": "192.168.100.8", 00:19:43.017 "trsvcid": "47164" 00:19:43.017 }, 00:19:43.017 "auth": { 00:19:43.017 "state": "completed", 00:19:43.018 "digest": "sha512", 00:19:43.018 "dhgroup": "ffdhe2048" 00:19:43.018 } 00:19:43.018 } 00:19:43.018 ]' 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.018 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.276 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:43.843 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.102 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.360 00:19:44.360 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.360 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.360 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.618 { 00:19:44.618 "cntlid": 107, 00:19:44.618 "qid": 0, 00:19:44.618 "state": "enabled", 00:19:44.618 "thread": "nvmf_tgt_poll_group_000", 00:19:44.618 "listen_address": { 00:19:44.618 "trtype": "RDMA", 00:19:44.618 "adrfam": "IPv4", 00:19:44.618 "traddr": "192.168.100.8", 00:19:44.618 "trsvcid": "4420" 00:19:44.618 }, 00:19:44.618 "peer_address": { 00:19:44.618 "trtype": "RDMA", 00:19:44.618 "adrfam": "IPv4", 00:19:44.618 "traddr": "192.168.100.8", 00:19:44.618 "trsvcid": "57288" 00:19:44.618 }, 00:19:44.618 "auth": { 00:19:44.618 "state": "completed", 00:19:44.618 "digest": "sha512", 00:19:44.618 "dhgroup": "ffdhe2048" 00:19:44.618 } 00:19:44.618 } 00:19:44.618 ]' 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.618 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.618 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.618 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.618 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.618 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.618 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.877 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:45.444 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.702 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:45.702 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.702 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.702 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.702 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.702 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:45.702 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.960 
10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.960 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.960 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.218 { 00:19:46.218 "cntlid": 109, 00:19:46.218 "qid": 0, 00:19:46.218 "state": "enabled", 00:19:46.218 "thread": "nvmf_tgt_poll_group_000", 00:19:46.218 "listen_address": { 00:19:46.218 "trtype": "RDMA", 00:19:46.218 "adrfam": "IPv4", 00:19:46.218 "traddr": "192.168.100.8", 00:19:46.218 "trsvcid": "4420" 00:19:46.218 }, 00:19:46.218 "peer_address": { 00:19:46.218 "trtype": "RDMA", 00:19:46.218 "adrfam": "IPv4", 00:19:46.218 "traddr": "192.168.100.8", 00:19:46.218 "trsvcid": "57168" 00:19:46.218 }, 00:19:46.218 "auth": { 00:19:46.218 "state": "completed", 00:19:46.218 "digest": "sha512", 00:19:46.218 "dhgroup": "ffdhe2048" 00:19:46.218 } 00:19:46.218 } 00:19:46.218 ]' 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.218 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.476 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.476 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.476 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.476 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.476 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.476 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.412 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.412 10:41:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.671 00:19:47.671 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.671 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.671 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.930 { 00:19:47.930 "cntlid": 111, 00:19:47.930 "qid": 0, 00:19:47.930 "state": "enabled", 00:19:47.930 "thread": "nvmf_tgt_poll_group_000", 00:19:47.930 "listen_address": { 00:19:47.930 "trtype": "RDMA", 00:19:47.930 "adrfam": "IPv4", 00:19:47.930 "traddr": "192.168.100.8", 00:19:47.930 "trsvcid": "4420" 00:19:47.930 }, 00:19:47.930 "peer_address": { 00:19:47.930 "trtype": "RDMA", 00:19:47.930 "adrfam": "IPv4", 00:19:47.930 "traddr": "192.168.100.8", 00:19:47.930 "trsvcid": "59109" 00:19:47.930 }, 00:19:47.930 "auth": { 00:19:47.930 "state": "completed", 00:19:47.930 "digest": "sha512", 00:19:47.930 "dhgroup": "ffdhe2048" 00:19:47.930 } 00:19:47.930 } 00:19:47.930 ]' 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.930 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.190 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.190 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.190 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.190 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 
803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:48.779 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.038 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.296 00:19:49.296 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.296 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.296 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.555 { 00:19:49.555 "cntlid": 113, 00:19:49.555 "qid": 0, 00:19:49.555 "state": "enabled", 00:19:49.555 "thread": "nvmf_tgt_poll_group_000", 00:19:49.555 "listen_address": { 00:19:49.555 "trtype": "RDMA", 00:19:49.555 "adrfam": "IPv4", 00:19:49.555 "traddr": "192.168.100.8", 00:19:49.555 "trsvcid": "4420" 00:19:49.555 }, 00:19:49.555 "peer_address": { 00:19:49.555 "trtype": "RDMA", 00:19:49.555 "adrfam": "IPv4", 00:19:49.555 "traddr": "192.168.100.8", 00:19:49.555 "trsvcid": "46811" 00:19:49.555 }, 00:19:49.555 "auth": { 00:19:49.555 "state": "completed", 00:19:49.555 "digest": "sha512", 00:19:49.555 "dhgroup": "ffdhe3072" 00:19:49.555 } 00:19:49.555 } 00:19:49.555 ]' 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.555 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.555 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.814 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.814 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.814 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.814 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret 
DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:50.382 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.641 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:50.641 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.641 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.641 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.641 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.641 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:50.641 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.900 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.160 00:19:51.160 10:41:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.160 { 00:19:51.160 "cntlid": 115, 00:19:51.160 "qid": 0, 00:19:51.160 "state": "enabled", 00:19:51.160 "thread": "nvmf_tgt_poll_group_000", 00:19:51.160 "listen_address": { 00:19:51.160 "trtype": "RDMA", 00:19:51.160 "adrfam": "IPv4", 00:19:51.160 "traddr": "192.168.100.8", 00:19:51.160 "trsvcid": "4420" 00:19:51.160 }, 00:19:51.160 "peer_address": { 00:19:51.160 "trtype": "RDMA", 00:19:51.160 "adrfam": "IPv4", 00:19:51.160 "traddr": "192.168.100.8", 00:19:51.160 "trsvcid": "32810" 00:19:51.160 }, 00:19:51.160 "auth": { 00:19:51.160 "state": "completed", 00:19:51.160 "digest": "sha512", 00:19:51.160 "dhgroup": "ffdhe3072" 00:19:51.160 } 00:19:51.160 } 00:19:51.160 ]' 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.160 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.419 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.419 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.419 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.419 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.419 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.419 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.419 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.356 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.356 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.615 00:19:52.615 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.615 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.615 10:42:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.874 { 00:19:52.874 "cntlid": 117, 00:19:52.874 "qid": 0, 00:19:52.874 "state": "enabled", 00:19:52.874 "thread": "nvmf_tgt_poll_group_000", 00:19:52.874 "listen_address": { 00:19:52.874 "trtype": "RDMA", 00:19:52.874 "adrfam": "IPv4", 00:19:52.874 "traddr": "192.168.100.8", 00:19:52.874 "trsvcid": "4420" 00:19:52.874 }, 00:19:52.874 "peer_address": { 00:19:52.874 "trtype": "RDMA", 00:19:52.874 "adrfam": "IPv4", 00:19:52.874 "traddr": "192.168.100.8", 00:19:52.874 "trsvcid": "59178" 00:19:52.874 }, 00:19:52.874 "auth": { 00:19:52.874 "state": "completed", 00:19:52.874 "digest": "sha512", 00:19:52.874 "dhgroup": "ffdhe3072" 00:19:52.874 } 00:19:52.874 } 00:19:52.874 ]' 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.874 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.133 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.133 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.133 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.133 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.067 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.325 00:19:54.325 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.325 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.325 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.584 10:42:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.584 { 00:19:54.584 "cntlid": 119, 00:19:54.584 "qid": 0, 00:19:54.584 "state": "enabled", 00:19:54.584 "thread": "nvmf_tgt_poll_group_000", 00:19:54.584 "listen_address": { 00:19:54.584 "trtype": "RDMA", 00:19:54.584 "adrfam": "IPv4", 00:19:54.584 "traddr": "192.168.100.8", 00:19:54.584 "trsvcid": "4420" 00:19:54.584 }, 00:19:54.584 "peer_address": { 00:19:54.584 "trtype": "RDMA", 00:19:54.584 "adrfam": "IPv4", 00:19:54.584 "traddr": "192.168.100.8", 00:19:54.584 "trsvcid": "50177" 00:19:54.584 }, 00:19:54.584 "auth": { 00:19:54.584 "state": "completed", 00:19:54.584 "digest": "sha512", 00:19:54.584 "dhgroup": "ffdhe3072" 00:19:54.584 } 00:19:54.584 } 00:19:54.584 ]' 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.584 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.584 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.584 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.584 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.843 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:19:55.409 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.667 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:55.667 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.667 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.667 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.667 
10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.667 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.667 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:55.667 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.925 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.183 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.183 { 00:19:56.183 "cntlid": 121, 00:19:56.183 "qid": 0, 00:19:56.183 "state": "enabled", 00:19:56.183 "thread": "nvmf_tgt_poll_group_000", 00:19:56.183 "listen_address": { 00:19:56.183 "trtype": "RDMA", 00:19:56.183 "adrfam": "IPv4", 00:19:56.183 "traddr": "192.168.100.8", 00:19:56.183 "trsvcid": "4420" 00:19:56.183 }, 00:19:56.183 "peer_address": { 00:19:56.183 "trtype": "RDMA", 00:19:56.183 "adrfam": "IPv4", 00:19:56.183 "traddr": "192.168.100.8", 00:19:56.183 "trsvcid": "38123" 00:19:56.183 }, 00:19:56.183 "auth": { 00:19:56.183 "state": "completed", 00:19:56.183 "digest": "sha512", 00:19:56.183 "dhgroup": "ffdhe4096" 00:19:56.183 } 00:19:56.183 } 00:19:56.183 ]' 00:19:56.183 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.460 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.460 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.460 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.460 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.460 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.460 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.460 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.759 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:19:57.327 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.327 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:57.327 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.327 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.327 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.327 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.327 10:42:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:57.327 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.586 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.845 00:19:57.845 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.845 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.845 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.103 
10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.103 { 00:19:58.103 "cntlid": 123, 00:19:58.103 "qid": 0, 00:19:58.103 "state": "enabled", 00:19:58.103 "thread": "nvmf_tgt_poll_group_000", 00:19:58.103 "listen_address": { 00:19:58.103 "trtype": "RDMA", 00:19:58.103 "adrfam": "IPv4", 00:19:58.103 "traddr": "192.168.100.8", 00:19:58.103 "trsvcid": "4420" 00:19:58.103 }, 00:19:58.103 "peer_address": { 00:19:58.103 "trtype": "RDMA", 00:19:58.103 "adrfam": "IPv4", 00:19:58.103 "traddr": "192.168.100.8", 00:19:58.103 "trsvcid": "59589" 00:19:58.103 }, 00:19:58.103 "auth": { 00:19:58.103 "state": "completed", 00:19:58.103 "digest": "sha512", 00:19:58.103 "dhgroup": "ffdhe4096" 00:19:58.103 } 00:19:58.103 } 00:19:58.103 ]' 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.103 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.362 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:19:58.931 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.190 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.449 00:19:59.449 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.449 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.449 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.707 { 00:19:59.707 "cntlid": 125, 00:19:59.707 "qid": 0, 00:19:59.707 "state": "enabled", 00:19:59.707 "thread": "nvmf_tgt_poll_group_000", 
00:19:59.707 "listen_address": { 00:19:59.707 "trtype": "RDMA", 00:19:59.707 "adrfam": "IPv4", 00:19:59.707 "traddr": "192.168.100.8", 00:19:59.707 "trsvcid": "4420" 00:19:59.707 }, 00:19:59.707 "peer_address": { 00:19:59.707 "trtype": "RDMA", 00:19:59.707 "adrfam": "IPv4", 00:19:59.707 "traddr": "192.168.100.8", 00:19:59.707 "trsvcid": "60782" 00:19:59.707 }, 00:19:59.707 "auth": { 00:19:59.707 "state": "completed", 00:19:59.707 "digest": "sha512", 00:19:59.707 "dhgroup": "ffdhe4096" 00:19:59.707 } 00:19:59.707 } 00:19:59.707 ]' 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.707 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.966 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.966 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.966 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.966 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:20:00.533 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.792 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:00.792 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.792 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.792 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.792 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.792 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.792 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:01.050 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 
00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.051 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.309 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.309 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.309 { 00:20:01.309 "cntlid": 127, 00:20:01.309 "qid": 0, 00:20:01.309 "state": "enabled", 00:20:01.309 "thread": "nvmf_tgt_poll_group_000", 00:20:01.309 "listen_address": { 00:20:01.309 "trtype": "RDMA", 00:20:01.309 "adrfam": "IPv4", 00:20:01.309 "traddr": "192.168.100.8", 00:20:01.309 "trsvcid": "4420" 00:20:01.309 }, 00:20:01.309 "peer_address": { 00:20:01.309 "trtype": "RDMA", 00:20:01.309 "adrfam": "IPv4", 00:20:01.309 "traddr": "192.168.100.8", 00:20:01.310 "trsvcid": "41804" 00:20:01.310 }, 00:20:01.310 
"auth": { 00:20:01.310 "state": "completed", 00:20:01.310 "digest": "sha512", 00:20:01.310 "dhgroup": "ffdhe4096" 00:20:01.310 } 00:20:01.310 } 00:20:01.310 ]' 00:20:01.310 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.568 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.568 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.568 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.568 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.568 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.568 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.568 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.827 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:02.394 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.652 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.911 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.170 { 00:20:03.170 "cntlid": 129, 00:20:03.170 "qid": 0, 00:20:03.170 "state": "enabled", 00:20:03.170 "thread": "nvmf_tgt_poll_group_000", 00:20:03.170 "listen_address": { 00:20:03.170 "trtype": "RDMA", 00:20:03.170 "adrfam": "IPv4", 00:20:03.170 "traddr": "192.168.100.8", 00:20:03.170 "trsvcid": "4420" 00:20:03.170 }, 00:20:03.170 "peer_address": { 00:20:03.170 "trtype": "RDMA", 00:20:03.170 "adrfam": "IPv4", 00:20:03.170 "traddr": "192.168.100.8", 00:20:03.170 "trsvcid": "41045" 00:20:03.170 }, 00:20:03.170 "auth": { 00:20:03.170 "state": "completed", 00:20:03.170 "digest": "sha512", 00:20:03.170 "dhgroup": "ffdhe6144" 00:20:03.170 } 00:20:03.170 } 00:20:03.170 ]' 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.170 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.428 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:03.428 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.428 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.428 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.428 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.428 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.363 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.364 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.930 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.930 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.930 { 00:20:04.930 "cntlid": 131, 00:20:04.930 "qid": 0, 00:20:04.930 "state": "enabled", 00:20:04.930 "thread": "nvmf_tgt_poll_group_000", 00:20:04.930 "listen_address": { 00:20:04.930 "trtype": "RDMA", 00:20:04.930 "adrfam": "IPv4", 00:20:04.930 "traddr": "192.168.100.8", 00:20:04.930 "trsvcid": "4420" 00:20:04.930 }, 00:20:04.930 "peer_address": { 00:20:04.930 "trtype": "RDMA", 00:20:04.930 "adrfam": "IPv4", 00:20:04.930 "traddr": "192.168.100.8", 00:20:04.930 "trsvcid": "56397" 00:20:04.930 }, 00:20:04.930 "auth": { 00:20:04.931 "state": "completed", 00:20:04.931 "digest": "sha512", 00:20:04.931 "dhgroup": "ffdhe6144" 00:20:04.931 } 00:20:04.931 } 00:20:04.931 ]' 00:20:04.931 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.931 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.931 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.931 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.931 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.189 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.189 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.189 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.189 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:20:05.756 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.015 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:06.015 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.015 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.015 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.015 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.015 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:06.015 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.274 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.533 00:20:06.533 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.533 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.533 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.792 { 00:20:06.792 "cntlid": 133, 00:20:06.792 "qid": 0, 00:20:06.792 "state": "enabled", 00:20:06.792 "thread": "nvmf_tgt_poll_group_000", 00:20:06.792 "listen_address": { 00:20:06.792 "trtype": "RDMA", 00:20:06.792 "adrfam": "IPv4", 00:20:06.792 "traddr": "192.168.100.8", 00:20:06.792 "trsvcid": "4420" 00:20:06.792 }, 00:20:06.792 "peer_address": { 00:20:06.792 "trtype": "RDMA", 00:20:06.792 "adrfam": "IPv4", 00:20:06.792 "traddr": "192.168.100.8", 00:20:06.792 "trsvcid": "59967" 00:20:06.792 }, 00:20:06.792 "auth": { 00:20:06.792 "state": "completed", 00:20:06.792 "digest": "sha512", 00:20:06.792 "dhgroup": "ffdhe6144" 00:20:06.792 } 00:20:06.792 } 00:20:06.792 ]' 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
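The block above is one full pass of the connect_authenticate helper: the host RPC server is told which digest and DH group to offer, the host NQN is re-added to the subsystem with the DH-CHAP key pair under test, a controller is attached through /var/tmp/host.sock, and the resulting qpair's auth block is checked with jq. A condensed sketch of that host-side sequence follows; the rpc.py path, sockets, addresses and NQNs are copied from the trace, while the shell variables and the key index (key2/ckey2 here) are placeholders for whatever the loop is exercising, and the DH-CHAP keys themselves are assumed to have been loaded earlier in the run.

# Sketch of one connect_authenticate pass (sha512 / ffdhe6144 / key2), as driven above.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path from the trace
HOST_SOCK=/var/tmp/host.sock                                       # host-side bdev_nvme app
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# 1. Restrict the host-side bdev_nvme layer to the digest/dhgroup under test.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# 2. Allow the host on the subsystem with the key pair being exercised
#    (target-side call, so it goes to the default RPC socket).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller; this is where DH-HMAC-CHAP actually runs.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $SUBNQN \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Verify the negotiated parameters on the target side.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expect: sha512
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect: ffdhe6144
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect: completed

# 5. Tear down before the next iteration. The trace then repeats the same check with the
#    kernel initiator (nvme connect --dhchap-secret ... --dhchap-ctrl-secret ... / nvme
#    disconnect) before removing the host from the subsystem.
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0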
00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.792 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.051 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:20:07.618 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.618 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:07.618 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.618 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.877 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.136 00:20:08.136 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.136 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.136 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.395 { 00:20:08.395 "cntlid": 135, 00:20:08.395 "qid": 0, 00:20:08.395 "state": "enabled", 00:20:08.395 "thread": "nvmf_tgt_poll_group_000", 00:20:08.395 "listen_address": { 00:20:08.395 "trtype": "RDMA", 00:20:08.395 "adrfam": "IPv4", 00:20:08.395 "traddr": "192.168.100.8", 00:20:08.395 "trsvcid": "4420" 00:20:08.395 }, 00:20:08.395 "peer_address": { 00:20:08.395 "trtype": "RDMA", 00:20:08.395 "adrfam": "IPv4", 00:20:08.395 "traddr": "192.168.100.8", 00:20:08.395 "trsvcid": "33153" 00:20:08.395 }, 00:20:08.395 "auth": { 00:20:08.395 "state": "completed", 00:20:08.395 "digest": "sha512", 00:20:08.395 "dhgroup": "ffdhe6144" 00:20:08.395 } 00:20:08.395 } 00:20:08.395 ]' 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.395 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.654 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.654 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.654 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.654 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.654 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.654 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.590 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.158 00:20:10.158 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.158 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.158 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.417 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.417 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.417 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.417 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.417 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.417 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.417 { 00:20:10.417 "cntlid": 137, 00:20:10.417 "qid": 0, 00:20:10.417 "state": "enabled", 00:20:10.417 "thread": "nvmf_tgt_poll_group_000", 00:20:10.417 "listen_address": { 00:20:10.417 "trtype": "RDMA", 00:20:10.417 "adrfam": "IPv4", 00:20:10.417 "traddr": "192.168.100.8", 00:20:10.417 "trsvcid": "4420" 00:20:10.417 }, 00:20:10.417 "peer_address": { 00:20:10.417 "trtype": "RDMA", 00:20:10.417 "adrfam": "IPv4", 00:20:10.417 "traddr": "192.168.100.8", 00:20:10.417 "trsvcid": "39138" 00:20:10.417 }, 00:20:10.417 "auth": { 00:20:10.417 "state": "completed", 00:20:10.417 "digest": "sha512", 00:20:10.417 "dhgroup": "ffdhe8192" 00:20:10.417 } 00:20:10.418 } 00:20:10.418 ]' 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.418 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.677 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:20:11.243 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.243 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:11.243 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.243 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.244 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.244 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.244 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:11.244 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.502 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.069 00:20:12.069 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.069 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.069 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.069 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.069 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.069 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.069 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.327 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.328 { 00:20:12.328 "cntlid": 139, 00:20:12.328 "qid": 0, 00:20:12.328 "state": "enabled", 00:20:12.328 "thread": "nvmf_tgt_poll_group_000", 00:20:12.328 "listen_address": { 00:20:12.328 "trtype": "RDMA", 00:20:12.328 "adrfam": "IPv4", 00:20:12.328 "traddr": "192.168.100.8", 00:20:12.328 "trsvcid": "4420" 00:20:12.328 }, 00:20:12.328 "peer_address": { 00:20:12.328 "trtype": "RDMA", 00:20:12.328 "adrfam": "IPv4", 00:20:12.328 "traddr": "192.168.100.8", 00:20:12.328 "trsvcid": "37457" 00:20:12.328 }, 00:20:12.328 "auth": { 00:20:12.328 "state": "completed", 00:20:12.328 "digest": "sha512", 00:20:12.328 "dhgroup": "ffdhe8192" 00:20:12.328 } 00:20:12.328 } 00:20:12.328 ]' 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.328 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.585 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MjFlYzQ0YmMzNTNhNjY3MGQzNTljYzdlZWU3NDJjYjD+i/g4: --dhchap-ctrl-secret DHHC-1:02:NTFmMTc2NGJjOTZhOGFiZWU3NWIzNDE5YzhhMzVhMjkyMDNiMzg5YjFmNzA5NjcwMvoMkA==: 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.152 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.410 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:20:13.978 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.978 { 00:20:13.978 "cntlid": 141, 00:20:13.978 "qid": 0, 00:20:13.978 "state": "enabled", 00:20:13.978 "thread": "nvmf_tgt_poll_group_000", 00:20:13.978 "listen_address": { 00:20:13.978 "trtype": "RDMA", 00:20:13.978 "adrfam": "IPv4", 00:20:13.978 "traddr": "192.168.100.8", 00:20:13.978 "trsvcid": "4420" 00:20:13.978 }, 00:20:13.978 "peer_address": { 00:20:13.978 "trtype": "RDMA", 00:20:13.978 "adrfam": "IPv4", 00:20:13.978 "traddr": "192.168.100.8", 00:20:13.978 "trsvcid": "57026" 00:20:13.978 }, 00:20:13.978 "auth": { 00:20:13.978 "state": "completed", 00:20:13.978 "digest": "sha512", 00:20:13.978 "dhgroup": "ffdhe8192" 00:20:13.978 } 00:20:13.978 } 00:20:13.978 ]' 00:20:13.978 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.237 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.237 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.237 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.237 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.237 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.237 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.237 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.538 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjYzMmM1MzEyNTE1OWNkMGI0MjA3ZDUzOTY4Zjk1YWM0MjljM2Y5ZjNlNDkwZDYwK2jCRA==: --dhchap-ctrl-secret DHHC-1:01:M2U2M2RmODcwODMxOGI3NmQxMmI5M2ZlYjA2MzU3OTZxXvhE: 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:15.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.161 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.728 00:20:15.728 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.728 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.728 10:42:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.987 { 00:20:15.987 "cntlid": 143, 00:20:15.987 "qid": 0, 00:20:15.987 "state": "enabled", 00:20:15.987 "thread": "nvmf_tgt_poll_group_000", 00:20:15.987 "listen_address": { 00:20:15.987 "trtype": "RDMA", 00:20:15.987 "adrfam": "IPv4", 00:20:15.987 "traddr": "192.168.100.8", 00:20:15.987 "trsvcid": "4420" 00:20:15.987 }, 00:20:15.987 "peer_address": { 00:20:15.987 "trtype": "RDMA", 00:20:15.987 "adrfam": "IPv4", 00:20:15.987 "traddr": "192.168.100.8", 00:20:15.987 "trsvcid": "35396" 00:20:15.987 }, 00:20:15.987 "auth": { 00:20:15.987 "state": "completed", 00:20:15.987 "digest": "sha512", 00:20:15.987 "dhgroup": "ffdhe8192" 00:20:15.987 } 00:20:15.987 } 00:20:15.987 ]' 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:15.987 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.988 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.988 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.988 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.246 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:20:16.814 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.073 
10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.073 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.640 00:20:17.640 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.640 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:20:17.640 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.899 { 00:20:17.899 "cntlid": 145, 00:20:17.899 "qid": 0, 00:20:17.899 "state": "enabled", 00:20:17.899 "thread": "nvmf_tgt_poll_group_000", 00:20:17.899 "listen_address": { 00:20:17.899 "trtype": "RDMA", 00:20:17.899 "adrfam": "IPv4", 00:20:17.899 "traddr": "192.168.100.8", 00:20:17.899 "trsvcid": "4420" 00:20:17.899 }, 00:20:17.899 "peer_address": { 00:20:17.899 "trtype": "RDMA", 00:20:17.899 "adrfam": "IPv4", 00:20:17.899 "traddr": "192.168.100.8", 00:20:17.899 "trsvcid": "39246" 00:20:17.899 }, 00:20:17.899 "auth": { 00:20:17.899 "state": "completed", 00:20:17.899 "digest": "sha512", 00:20:17.899 "dhgroup": "ffdhe8192" 00:20:17.899 } 00:20:17.899 } 00:20:17.899 ]' 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.899 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.158 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTA4ODFjYjI0M2JkMDgyZmIzZjBkM2M0MzNjZGM5YTgxYzg3ZmZiN2U0NTcyN2M4RxA3hg==: --dhchap-ctrl-secret DHHC-1:03:MjlkMzQ5NTUzYjNjNDA4NjkzNTYzOTZhNTM4NmExMDlkNTYyMTZjMTBjOTQ4ODZhMGJmOGI1NGMzZTEzNTAwNROJz80=: 00:20:18.724 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.983 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:18.983 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.983 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.983 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.983 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:20:18.983 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.984 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:51.055 request: 00:20:51.055 { 00:20:51.055 "name": "nvme0", 00:20:51.055 "trtype": "rdma", 00:20:51.055 "traddr": "192.168.100.8", 00:20:51.055 "adrfam": "ipv4", 00:20:51.055 "trsvcid": "4420", 00:20:51.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:51.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:20:51.055 "prchk_reftag": false, 00:20:51.055 "prchk_guard": false, 00:20:51.055 "hdgst": false, 00:20:51.055 "ddgst": false, 00:20:51.055 "dhchap_key": "key2", 00:20:51.055 "method": "bdev_nvme_attach_controller", 
00:20:51.055 "req_id": 1 00:20:51.055 } 00:20:51.055 Got JSON-RPC error response 00:20:51.055 response: 00:20:51.055 { 00:20:51.055 "code": -5, 00:20:51.055 "message": "Input/output error" 00:20:51.055 } 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.055 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.055 10:42:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.055 request: 00:20:51.055 { 00:20:51.055 "name": "nvme0", 00:20:51.055 "trtype": "rdma", 00:20:51.055 "traddr": "192.168.100.8", 00:20:51.055 "adrfam": "ipv4", 00:20:51.055 "trsvcid": "4420", 00:20:51.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:51.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:20:51.055 "prchk_reftag": false, 00:20:51.055 "prchk_guard": false, 00:20:51.055 "hdgst": false, 00:20:51.055 "ddgst": false, 00:20:51.055 "dhchap_key": "key1", 00:20:51.055 "dhchap_ctrlr_key": "ckey2", 00:20:51.055 "method": "bdev_nvme_attach_controller", 00:20:51.055 "req_id": 1 00:20:51.055 } 00:20:51.055 Got JSON-RPC error response 00:20:51.055 response: 00:20:51.055 { 00:20:51.055 "code": -5, 00:20:51.055 "message": "Input/output error" 00:20:51.055 } 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.055 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.131 request: 00:21:23.131 { 00:21:23.131 "name": "nvme0", 00:21:23.131 "trtype": "rdma", 00:21:23.131 "traddr": "192.168.100.8", 00:21:23.131 "adrfam": "ipv4", 00:21:23.131 "trsvcid": "4420", 00:21:23.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:23.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:21:23.131 "prchk_reftag": false, 00:21:23.131 "prchk_guard": false, 00:21:23.131 "hdgst": false, 00:21:23.131 "ddgst": false, 00:21:23.131 "dhchap_key": "key1", 00:21:23.131 "dhchap_ctrlr_key": "ckey1", 00:21:23.131 "method": "bdev_nvme_attach_controller", 00:21:23.131 "req_id": 1 00:21:23.131 } 00:21:23.131 Got JSON-RPC error response 00:21:23.131 response: 00:21:23.131 { 00:21:23.131 "code": -5, 00:21:23.131 "message": "Input/output error" 00:21:23.131 } 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2233058 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2233058 ']' 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2233058 00:21:23.131 10:43:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2233058 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2233058' 00:21:23.131 killing process with pid 2233058 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2233058 00:21:23.131 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2233058 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2265593 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2265593 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2265593 ']' 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.131 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2265593 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2265593 ']' 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
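The failed attach attempts traced above are the expected outcome of the DH-HMAC-CHAP negative checks in target/auth.sh: the host entry on the target side and the controller key offered by the host do not match, so bdev_nvme_attach_controller issued against the host-side RPC socket must come back with JSON-RPC code -5 (Input/output error). A minimal sketch of that pattern, reusing the socket paths, addresses and NQNs from this run (the key names are the ones auth.sh loads; a running SPDK target and host application are assumed):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Target side (default /var/tmp/spdk.sock): allow the host with key1 only.
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-key key1

    # Host side (/var/tmp/host.sock): a mismatched controller key must fail with -5.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
        && echo 'unexpected success' >&2

After these checks the first target process (pid 2233058) is killed and a fresh nvmf_tgt is started with --wait-for-rpc -L nvmf_auth, so the remaining cases run with the nvmf_auth debug log flag enabled.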
00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.132 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.132 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.132 { 00:21:23.132 "cntlid": 1, 00:21:23.132 "qid": 0, 00:21:23.132 "state": "enabled", 00:21:23.132 "thread": "nvmf_tgt_poll_group_000", 00:21:23.132 "listen_address": { 00:21:23.132 "trtype": "RDMA", 00:21:23.132 "adrfam": "IPv4", 00:21:23.132 "traddr": "192.168.100.8", 00:21:23.132 "trsvcid": "4420" 00:21:23.132 }, 00:21:23.132 "peer_address": { 00:21:23.132 "trtype": "RDMA", 00:21:23.132 "adrfam": "IPv4", 00:21:23.132 "traddr": "192.168.100.8", 00:21:23.132 "trsvcid": "40831" 00:21:23.132 }, 00:21:23.132 "auth": { 00:21:23.132 "state": "completed", 00:21:23.132 "digest": "sha512", 00:21:23.132 "dhgroup": "ffdhe8192" 00:21:23.132 } 00:21:23.132 } 00:21:23.132 ]' 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.132 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGE1NzE0NGZhNzRlYjcxNTkzNWMxNzRhNmRjOTFiZmYxMTAxOGU5MzE1OThhZmJhZTYxMDkxYTE0MGJkNDdiYlq6dDU=: 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.132 10:43:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:23.132 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.133 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:23.133 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.133 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:23.133 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.133 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.133 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.212 request: 00:21:55.212 { 00:21:55.212 "name": "nvme0", 00:21:55.212 "trtype": "rdma", 00:21:55.212 "traddr": "192.168.100.8", 00:21:55.212 "adrfam": "ipv4", 00:21:55.212 "trsvcid": "4420", 00:21:55.212 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:55.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:21:55.212 "prchk_reftag": false, 00:21:55.212 "prchk_guard": false, 00:21:55.212 "hdgst": false, 00:21:55.212 "ddgst": false, 00:21:55.212 "dhchap_key": "key3", 00:21:55.212 "method": "bdev_nvme_attach_controller", 00:21:55.212 "req_id": 1 00:21:55.212 } 00:21:55.212 Got JSON-RPC error response 00:21:55.212 response: 
00:21:55.212 { 00:21:55.212 "code": -5, 00:21:55.212 "message": "Input/output error" 00:21:55.212 } 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:55.212 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.212 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.322 request: 00:22:27.322 { 00:22:27.322 "name": "nvme0", 00:22:27.322 "trtype": "rdma", 00:22:27.322 "traddr": "192.168.100.8", 00:22:27.322 "adrfam": "ipv4", 00:22:27.322 "trsvcid": "4420", 00:22:27.322 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:22:27.322 
"prchk_reftag": false, 00:22:27.322 "prchk_guard": false, 00:22:27.322 "hdgst": false, 00:22:27.322 "ddgst": false, 00:22:27.322 "dhchap_key": "key3", 00:22:27.322 "method": "bdev_nvme_attach_controller", 00:22:27.322 "req_id": 1 00:22:27.322 } 00:22:27.322 Got JSON-RPC error response 00:22:27.322 response: 00:22:27.322 { 00:22:27.322 "code": -5, 00:22:27.322 "message": "Input/output error" 00:22:27.322 } 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.322 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:27.323 10:44:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.323 request: 00:22:27.323 { 00:22:27.323 "name": "nvme0", 00:22:27.323 "trtype": "rdma", 00:22:27.323 "traddr": "192.168.100.8", 00:22:27.323 "adrfam": "ipv4", 00:22:27.323 "trsvcid": "4420", 00:22:27.323 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:22:27.323 "prchk_reftag": false, 00:22:27.323 "prchk_guard": false, 00:22:27.323 "hdgst": false, 00:22:27.323 "ddgst": false, 00:22:27.323 "dhchap_key": "key0", 00:22:27.323 "dhchap_ctrlr_key": "key1", 00:22:27.323 "method": "bdev_nvme_attach_controller", 00:22:27.323 "req_id": 1 00:22:27.323 } 00:22:27.323 Got JSON-RPC error response 00:22:27.323 response: 00:22:27.323 { 00:22:27.323 "code": -5, 00:22:27.323 "message": "Input/output error" 00:22:27.323 } 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:27.323 00:22:27.323 
10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:27.323 10:44:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2233077 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2233077 ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2233077 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2233077 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2233077' 00:22:27.323 killing process with pid 2233077 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2233077 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2233077 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:27.323 rmmod nvme_rdma 00:22:27.323 rmmod nvme_fabrics 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2265593 ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2265593 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2265593 ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2265593 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2265593 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2265593' 00:22:27.323 killing process with pid 2265593 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2265593 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2265593 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.dDI /tmp/spdk.key-sha256.MiM /tmp/spdk.key-sha384.FSj /tmp/spdk.key-sha512.Sf3 /tmp/spdk.key-sha512.Gad /tmp/spdk.key-sha384.ik9 /tmp/spdk.key-sha256.T0t '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:22:27.323 00:22:27.323 real 4m20.390s 00:22:27.323 user 9m23.243s 00:22:27.323 sys 0m19.066s 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.323 ************************************ 00:22:27.323 END TEST nvmf_auth_target 00:22:27.323 ************************************ 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:27.323 10:44:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:27.323 10:44:32 
nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.323 ************************************ 00:22:27.323 START TEST nvmf_fuzz 00:22:27.323 ************************************ 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:27.323 * Looking for test storage... 00:22:27.323 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.323 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.324 10:44:33 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.513 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.514 10:44:38 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:31.514 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:31.514 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:31.514 10:44:38 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:31.514 Found net devices under 0000:da:00.0: mlx_0_0 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:31.514 Found net devices under 0000:da:00.1: mlx_0_1 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 
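nvmftestinit has found the two mlx5 ports (0000:da:00.0 and 0000:da:00.1, net devices mlx_0_0 and mlx_0_1) and rdma_device_init is now loading the RDMA kernel stack; the remaining modules and the per-interface IP lookup follow just below. Taken together, the sequence traced here amounts to the following sketch (module list and interface names as seen in this run):

    # Kernel modules loaded by load_ib_rdma_modules in this run.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done

    # allocate_nic_ips / get_ip_address: first IPv4 address on each RDMA port.
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done

which yields 192.168.100.8 and 192.168.100.9, the addresses nvmf/common.sh records as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP further down.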
00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:31.514 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@81 
-- # ip addr show mlx_0_0 00:22:31.515 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:31.515 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:31.515 altname enp218s0f0np0 00:22:31.515 altname ens818f0np0 00:22:31.515 inet 192.168.100.8/24 scope global mlx_0_0 00:22:31.515 valid_lft forever preferred_lft forever 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:31.515 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:31.515 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:31.515 altname enp218s0f1np1 00:22:31.515 altname ens818f1np1 00:22:31.515 inet 192.168.100.9/24 scope global mlx_0_1 00:22:31.515 valid_lft forever preferred_lft forever 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:31.515 192.168.100.9' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:31.515 192.168.100.9' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:31.515 192.168.100.9' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:31.515 
10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2279096 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2279096 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2279096 ']' 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.515 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:31.516 Malloc0 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:22:31.516 10:44:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:23:03.590 Fuzzing completed. Shutting down the fuzz application 00:23:03.590 00:23:03.590 Dumping successful admin opcodes: 00:23:03.590 8, 9, 10, 24, 00:23:03.590 Dumping successful io opcodes: 00:23:03.590 0, 9, 00:23:03.590 NS: 0x200003af1f00 I/O qp, Total commands completed: 1270167, total successful commands: 7483, random_seed: 3617296896 00:23:03.590 NS: 0x200003af1f00 admin qp, Total commands completed: 169162, total successful commands: 1379, random_seed: 2890699584 00:23:03.590 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:03.590 Fuzzing completed. 
Shutting down the fuzz application 00:23:03.590 00:23:03.590 Dumping successful admin opcodes: 00:23:03.590 24, 00:23:03.590 Dumping successful io opcodes: 00:23:03.590 00:23:03.590 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 288085776 00:23:03.590 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 288150036 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:03.590 rmmod nvme_rdma 00:23:03.590 rmmod nvme_fabrics 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.590 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2279096 ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2279096 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2279096 ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 2279096 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2279096 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2279096' 00:23:03.591 killing process with pid 2279096 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 2279096 
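Condensed, the fuzz phase traced above amounts to: start nvmf_tgt on one core, create an RDMA transport and a single Malloc-backed subsystem over the RPC socket, point nvme_fuzz at the resulting transport ID, then tear the subsystem down and kill the target. The sketch below re-creates that flow under the paths shown in this log; calling rpc.py directly and waiting with a fixed sleep are assumptions standing in for the rpc_cmd and waitforlisten helpers, while the RPC arguments and nvme_fuzz flags are taken from the trace.

#!/usr/bin/env bash
# Sketch of the fuzz phase traced above, using the paths from this log.
# SPDK_DIR and the sleep-based wait are assumptions; the RPC arguments and
# the nvme_fuzz flags are copied from the trace.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC=$SPDK_DIR/scripts/rpc.py

$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
tgt_pid=$!
sleep 3   # stand-in for waitforlisten on /var/tmp/spdk.sock

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create -b Malloc0 64 512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420

# 30-second fuzz pass with a fixed seed, as in the first run above.
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$tgt_pid"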
00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 2279096 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:03.591 00:23:03.591 real 0m37.649s 00:23:03.591 user 0m51.219s 00:23:03.591 sys 0m17.927s 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:03.591 ************************************ 00:23:03.591 END TEST nvmf_fuzz 00:23:03.591 ************************************ 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.591 ************************************ 00:23:03.591 START TEST nvmf_multiconnection 00:23:03.591 ************************************ 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:03.591 * Looking for test storage... 
00:23:03.591 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.591 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:08.861 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.861 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.861 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.862 10:45:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:08.862 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:08.862 10:45:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:08.862 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:08.862 Found net devices under 0000:da:00.0: mlx_0_0 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:08.862 Found net devices under 0000:da:00.1: mlx_0_1 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:23:08.862 10:45:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:08.862 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:08.863 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:08.863 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:08.863 altname enp218s0f0np0 00:23:08.863 altname ens818f0np0 00:23:08.863 inet 192.168.100.8/24 scope global mlx_0_0 00:23:08.863 valid_lft forever preferred_lft forever 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:08.863 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:08.863 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:08.863 altname enp218s0f1np1 00:23:08.863 altname ens818f1np1 00:23:08.863 inet 
192.168.100.9/24 scope global mlx_0_1 00:23:08.863 valid_lft forever preferred_lft forever 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:08.863 192.168.100.9' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:08.863 192.168.100.9' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:08.863 192.168.100.9' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2287793 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2287793 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 2287793 ']' 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.863 10:45:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.863 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:08.863 [2024-07-24 10:45:16.312868] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:23:08.863 [2024-07-24 10:45:16.312919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.121 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.121 [2024-07-24 10:45:16.369743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.121 [2024-07-24 10:45:16.413712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.121 [2024-07-24 10:45:16.413753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.121 [2024-07-24 10:45:16.413762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.121 [2024-07-24 10:45:16.413768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.121 [2024-07-24 10:45:16.413774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.121 [2024-07-24 10:45:16.413816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.121 [2024-07-24 10:45:16.413914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.121 [2024-07-24 10:45:16.413982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.121 [2024-07-24 10:45:16.413983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.121 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.121 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:23:09.121 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.121 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.122 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.122 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.122 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:09.122 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.122 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.122 [2024-07-24 10:45:16.570612] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8636a0/0x867b70) succeed. 00:23:09.380 [2024-07-24 10:45:16.579678] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x864c90/0x8a9200) succeed. 
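With the RDMA transport created and both IB devices registered (the two create_ib_device notices above), multiconnection.sh loops over seq 1 $NVMF_SUBSYS and gives each of the 11 subsystems its own Malloc bdev, namespace, and RDMA listener, as the trace that follows shows for cnode1 onward. A condensed sketch of that loop, assuming rpc.py is called directly in place of the rpc_cmd wrapper:

#!/usr/bin/env bash
# Condensed sketch of the per-subsystem loop traced below (cnode1..cnode11).
# RPC path, subsystem count, and listener address mirror this run; rpc.py
# stands in for the rpc_cmd wrapper used by the test script.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NVMF_SUBSYS=11
NVMF_FIRST_TARGET_IP=192.168.100.8

for i in $(seq 1 $NVMF_SUBSYS); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a "$NVMF_FIRST_TARGET_IP" -s 4420
done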
00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.380 Malloc1 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.380 [2024-07-24 10:45:16.747819] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.380 Malloc2 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.380 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.381 Malloc3 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:23:09.381 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.381 
10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 Malloc4 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 Malloc5 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 Malloc6 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 Malloc7 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.640 Malloc8 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.640 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.641 10:45:17 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.641 Malloc9 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.641 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 Malloc10 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 Malloc11 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.900 10:45:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:10.836 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:10.836 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:10.836 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:10.836 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:10.836 10:45:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:12.738 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:23:14.113 10:45:21 
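With all eleven subsystems exported, multiconnection.sh (script lines 28-30) connects the host to each one in turn and waits for the corresponding block device to appear: the waitforserial helper polls lsblk -l -o NAME,SERIAL for the subsystem serial (SPDK1, SPDK2, ...) up to 15 times with a 2-second sleep between checks, as the xtrace lines here show. A hedged sketch of that connect-and-wait step, reusing the host NQN/ID and addressing visible in the trace:

  # Connect loop and serial poll as traced (multiconnection.sh@28-30 plus the
  # waitforserial logic from autotest_common.sh); the retry count and sleep
  # mirror the "(( i++ <= 15 ))" and "sleep 2" lines above.
  HOSTID=803833e2-2ada-e911-906e-0017a4403562
  for i in $(seq 1 $NVMF_SUBSYS); do
      nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID \
          --hostid=$HOSTID -t rdma -n nqn.2016-06.io.spdk:cnode$i \
          -a 192.168.100.8 -s 4420
      n=0
      while (( n++ <= 15 )); do
          sleep 2
          # one device with this serial means the namespace is visible
          (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK$i) == 1 )) && break
      done
  done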
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:14.113 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:14.113 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:14.113 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:14.113 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.066 10:45:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:23:16.995 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:16.995 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:16.995 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:16.995 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:16.995 10:45:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:18.893 10:45:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:23:19.828 10:45:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:19.828 10:45:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:19.828 10:45:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.828 10:45:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:19.828 10:45:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.730 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:23:22.694 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:22.694 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:22.694 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:22.694 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:22.694 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:25.223 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:25.223 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:25.223 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:25.223 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:25.223 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:25.223 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:25.223 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.224 10:45:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:23:25.790 10:45:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:25.790 10:45:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:25.790 10:45:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:25.790 10:45:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:25.790 10:45:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.691 10:45:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:23:29.066 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:29.066 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:29.066 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:29.066 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:29.066 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.968 10:45:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:23:31.905 10:45:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:31.905 10:45:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:31.905 10:45:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:31.905 10:45:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:31.905 10:45:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.809 10:45:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:23:34.745 10:45:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:34.745 10:45:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:34.745 10:45:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:34.745 10:45:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:34.745 10:45:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:37.275 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:37.275 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:37.275 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:23:37.275 10:45:44 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:37.275 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:37.275 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:37.275 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:37.275 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:23:37.842 10:45:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:37.842 10:45:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:37.842 10:45:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:37.842 10:45:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:37.842 10:45:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:39.741 10:45:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:23:40.674 10:45:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:40.674 10:45:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:40.674 10:45:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:40.674 10:45:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:40.674 10:45:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:43.259 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:43.259 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:43.259 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:23:43.259 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:43.259 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:43.259 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:43.259 10:45:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:43.259 [global] 00:23:43.259 thread=1 00:23:43.259 invalidate=1 00:23:43.259 rw=read 00:23:43.259 time_based=1 00:23:43.259 runtime=10 00:23:43.259 ioengine=libaio 00:23:43.259 direct=1 00:23:43.260 bs=262144 00:23:43.260 iodepth=64 00:23:43.260 norandommap=1 00:23:43.260 numjobs=1 00:23:43.260 00:23:43.260 [job0] 00:23:43.260 filename=/dev/nvme0n1 00:23:43.260 [job1] 00:23:43.260 filename=/dev/nvme10n1 00:23:43.260 [job2] 00:23:43.260 filename=/dev/nvme1n1 00:23:43.260 [job3] 00:23:43.260 filename=/dev/nvme2n1 00:23:43.260 [job4] 00:23:43.260 filename=/dev/nvme3n1 00:23:43.260 [job5] 00:23:43.260 filename=/dev/nvme4n1 00:23:43.260 [job6] 00:23:43.260 filename=/dev/nvme5n1 00:23:43.260 [job7] 00:23:43.260 filename=/dev/nvme6n1 00:23:43.260 [job8] 00:23:43.260 filename=/dev/nvme7n1 00:23:43.260 [job9] 00:23:43.260 filename=/dev/nvme8n1 00:23:43.260 [job10] 00:23:43.260 filename=/dev/nvme9n1 00:23:43.260 Could not set queue depth (nvme0n1) 00:23:43.260 Could not set queue depth (nvme10n1) 00:23:43.260 Could not set queue depth (nvme1n1) 00:23:43.260 Could not set queue depth (nvme2n1) 00:23:43.260 Could not set queue depth (nvme3n1) 00:23:43.260 Could not set queue depth (nvme4n1) 00:23:43.260 Could not set queue depth (nvme5n1) 00:23:43.260 Could not set queue depth (nvme6n1) 00:23:43.260 Could not set queue depth (nvme7n1) 00:23:43.260 Could not set queue depth (nvme8n1) 00:23:43.260 Could not set queue depth (nvme9n1) 00:23:43.260 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
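The read pass is then driven by scripts/fio-wrapper as traced above; its flags -p nvmf -i 262144 -d 64 -t read -r 10 line up with the job file it printed ([global]: bs=262144, iodepth=64, rw=read, runtime=10, ioengine=libaio, direct=1), with one job per connected namespace (/dev/nvme0n1 through /dev/nvme10n1). As a rough single-device stand-in for one of those jobs (a sketch, not the wrapper's actual implementation; device path and job name are illustrative):

  # Approximate equivalent of one job from the generated file above.
  fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=262144 --iodepth=64 \
      --ioengine=libaio --direct=1 --time_based --runtime=10 --numjobs=1 \
      --norandommap --invalidate=1 --thread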
256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:43.260 fio-3.35 00:23:43.260 Starting 11 threads 00:23:55.469 00:23:55.469 job0: (groupid=0, jobs=1): err= 0: pid=2293825: Wed Jul 24 10:46:00 2024 00:23:55.469 read: IOPS=1503, BW=376MiB/s (394MB/s)(3769MiB/10027msec) 00:23:55.469 slat (usec): min=9, max=14769, avg=652.00, stdev=1576.08 00:23:55.469 clat (usec): min=8704, max=62437, avg=41883.20, stdev=8482.03 00:23:55.469 lat (usec): min=8904, max=62488, avg=42535.20, stdev=8696.28 00:23:55.469 clat percentiles (usec): 00:23:55.469 | 1.00th=[27919], 5.00th=[30802], 10.00th=[31327], 20.00th=[32375], 00:23:55.469 | 30.00th=[33162], 40.00th=[35914], 50.00th=[46924], 60.00th=[47449], 00:23:55.469 | 70.00th=[48497], 80.00th=[49021], 90.00th=[50594], 95.00th=[51643], 00:23:55.469 | 99.00th=[55313], 99.50th=[56886], 99.90th=[58983], 99.95th=[60031], 00:23:55.469 | 99.99th=[61080] 00:23:55.469 bw ( KiB/s): min=325120, max=503808, per=9.73%, avg=384298.50, stdev=75260.23, samples=20 00:23:55.469 iops : min= 1270, max= 1968, avg=1501.10, stdev=293.93, samples=20 00:23:55.469 lat (msec) : 10=0.05%, 20=0.41%, 50=87.25%, 100=12.29% 00:23:55.469 cpu : usr=0.39%, sys=5.40%, ctx=3061, majf=0, minf=3347 00:23:55.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:55.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.469 issued rwts: total=15074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.469 job1: (groupid=0, jobs=1): err= 0: pid=2293838: Wed Jul 24 10:46:00 2024 00:23:55.469 read: IOPS=1585, BW=396MiB/s (416MB/s)(3975MiB/10027msec) 00:23:55.470 slat (usec): min=10, max=14499, avg=619.96, stdev=1544.31 00:23:55.470 clat (usec): min=11049, max=75942, avg=39699.19, stdev=13779.42 00:23:55.470 lat (usec): min=11249, max=76752, avg=40319.15, stdev=14036.06 00:23:55.470 clat percentiles (usec): 00:23:55.470 | 1.00th=[14484], 5.00th=[15926], 10.00th=[16450], 20.00th=[30802], 00:23:55.470 | 30.00th=[32375], 40.00th=[33424], 50.00th=[46400], 60.00th=[47449], 00:23:55.470 | 70.00th=[48497], 80.00th=[49546], 90.00th=[52691], 95.00th=[64226], 00:23:55.470 | 99.00th=[66847], 99.50th=[69731], 99.90th=[72877], 99.95th=[74974], 00:23:55.470 | 99.99th=[76022] 00:23:55.470 bw ( KiB/s): min=249344, max=972800, per=10.27%, avg=405469.50, stdev=166229.85, samples=20 00:23:55.470 iops : min= 974, max= 3800, avg=1583.80, stdev=649.32, samples=20 00:23:55.470 lat (msec) : 20=15.45%, 50=67.35%, 100=17.21% 00:23:55.470 cpu : usr=0.46%, sys=5.63%, ctx=3113, majf=0, minf=4097 00:23:55.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.470 issued rwts: total=15901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.470 job2: (groupid=0, jobs=1): err= 0: pid=2293851: Wed Jul 24 10:46:00 2024 00:23:55.470 read: IOPS=1795, BW=449MiB/s (471MB/s)(4517MiB/10061msec) 00:23:55.470 slat (usec): min=9, max=25765, avg=549.10, stdev=1732.22 00:23:55.470 clat (msec): min=9, max=138, avg=35.06, stdev=21.99 00:23:55.470 lat (msec): min=9, max=138, avg=35.60, stdev=22.36 00:23:55.470 clat percentiles (msec): 00:23:55.470 | 1.00th=[ 15], 5.00th=[ 15], 10.00th=[ 
16], 20.00th=[ 17], 00:23:55.470 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 31], 60.00th=[ 46], 00:23:55.470 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 80], 95.00th=[ 82], 00:23:55.470 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 122], 99.95th=[ 136], 00:23:55.470 | 99.99th=[ 138] 00:23:55.470 bw ( KiB/s): min=190464, max=1013760, per=11.67%, avg=460915.00, stdev=298543.00, samples=20 00:23:55.470 iops : min= 744, max= 3960, avg=1800.40, stdev=1166.22, samples=20 00:23:55.470 lat (msec) : 10=0.03%, 20=47.21%, 50=35.93%, 100=16.47%, 250=0.36% 00:23:55.470 cpu : usr=0.39%, sys=5.42%, ctx=3377, majf=0, minf=4097 00:23:55.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.470 issued rwts: total=18068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.470 job3: (groupid=0, jobs=1): err= 0: pid=2293858: Wed Jul 24 10:46:00 2024 00:23:55.470 read: IOPS=885, BW=221MiB/s (232MB/s)(2226MiB/10060msec) 00:23:55.470 slat (usec): min=12, max=21192, avg=1113.38, stdev=2730.38 00:23:55.470 clat (msec): min=11, max=133, avg=71.13, stdev=12.08 00:23:55.470 lat (msec): min=12, max=133, avg=72.24, stdev=12.47 00:23:55.470 clat percentiles (msec): 00:23:55.470 | 1.00th=[ 49], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 63], 00:23:55.470 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 80], 00:23:55.470 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 84], 95.00th=[ 85], 00:23:55.470 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 129], 99.95th=[ 133], 00:23:55.470 | 99.99th=[ 133] 00:23:55.470 bw ( KiB/s): min=192897, max=300455, per=5.73%, avg=226334.00, stdev=36190.11, samples=20 00:23:55.470 iops : min= 753, max= 1173, avg=884.00, stdev=141.34, samples=20 00:23:55.470 lat (msec) : 20=0.28%, 50=1.36%, 100=97.97%, 250=0.39% 00:23:55.470 cpu : usr=0.39%, sys=3.87%, ctx=1797, majf=0, minf=4097 00:23:55.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.470 issued rwts: total=8904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.470 job4: (groupid=0, jobs=1): err= 0: pid=2293862: Wed Jul 24 10:46:00 2024 00:23:55.470 read: IOPS=1066, BW=267MiB/s (279MB/s)(2676MiB/10040msec) 00:23:55.470 slat (usec): min=12, max=21487, avg=923.93, stdev=2347.81 00:23:55.470 clat (msec): min=11, max=102, avg=59.06, stdev=13.26 00:23:55.470 lat (msec): min=11, max=102, avg=59.98, stdev=13.61 00:23:55.470 clat percentiles (msec): 00:23:55.470 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:23:55.470 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 55], 00:23:55.470 | 70.00th=[ 65], 80.00th=[ 79], 90.00th=[ 82], 95.00th=[ 83], 00:23:55.470 | 99.00th=[ 88], 99.50th=[ 93], 99.90th=[ 100], 99.95th=[ 100], 00:23:55.470 | 99.99th=[ 103] 00:23:55.470 bw ( KiB/s): min=194048, max=326797, per=6.90%, avg=272482.90, stdev=53942.06, samples=20 00:23:55.470 iops : min= 758, max= 1276, avg=1064.25, stdev=210.74, samples=20 00:23:55.470 lat (msec) : 20=0.21%, 50=31.45%, 100=68.31%, 250=0.03% 00:23:55.470 cpu : usr=0.36%, sys=4.39%, ctx=2170, majf=0, minf=4097 00:23:55.470 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.470 issued rwts: total=10704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.470 job5: (groupid=0, jobs=1): err= 0: pid=2293876: Wed Jul 24 10:46:00 2024 00:23:55.470 read: IOPS=1427, BW=357MiB/s (374MB/s)(3584MiB/10042msec) 00:23:55.470 slat (usec): min=11, max=15138, avg=694.49, stdev=1685.18 00:23:55.470 clat (usec): min=11239, max=90490, avg=44096.77, stdev=10476.83 00:23:55.470 lat (usec): min=11431, max=90527, avg=44791.26, stdev=10710.45 00:23:55.470 clat percentiles (usec): 00:23:55.470 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31851], 20.00th=[32900], 00:23:55.470 | 30.00th=[33817], 40.00th=[36439], 50.00th=[47973], 60.00th=[49021], 00:23:55.470 | 70.00th=[49546], 80.00th=[51119], 90.00th=[54789], 95.00th=[64750], 00:23:55.470 | 99.00th=[67634], 99.50th=[70779], 99.90th=[78119], 99.95th=[88605], 00:23:55.470 | 99.99th=[90702] 00:23:55.470 bw ( KiB/s): min=246272, max=490538, per=9.25%, avg=365377.25, stdev=84398.43, samples=20 00:23:55.470 iops : min= 962, max= 1916, avg=1427.20, stdev=329.70, samples=20 00:23:55.470 lat (msec) : 20=0.21%, 50=72.98%, 100=26.81% 00:23:55.470 cpu : usr=0.43%, sys=5.58%, ctx=2736, majf=0, minf=4097 00:23:55.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.470 issued rwts: total=14335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.470 job6: (groupid=0, jobs=1): err= 0: pid=2293883: Wed Jul 24 10:46:00 2024 00:23:55.470 read: IOPS=943, BW=236MiB/s (247MB/s)(2373MiB/10060msec) 00:23:55.470 slat (usec): min=10, max=24069, avg=1034.30, stdev=2608.38 00:23:55.470 clat (msec): min=10, max=137, avg=66.74, stdev=15.78 00:23:55.470 lat (msec): min=10, max=152, avg=67.77, stdev=16.17 00:23:55.470 clat percentiles (msec): 00:23:55.470 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:23:55.470 | 30.00th=[ 51], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 80], 00:23:55.470 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 86], 00:23:55.470 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 128], 99.95th=[ 134], 00:23:55.470 | 99.99th=[ 138] 00:23:55.470 bw ( KiB/s): min=189952, max=336896, per=6.11%, avg=241390.00, stdev=55958.45, samples=20 00:23:55.470 iops : min= 742, max= 1316, avg=942.85, stdev=218.58, samples=20 00:23:55.470 lat (msec) : 20=0.37%, 50=27.61%, 100=71.62%, 250=0.41% 00:23:55.470 cpu : usr=0.32%, sys=3.67%, ctx=2035, majf=0, minf=4097 00:23:55.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.470 issued rwts: total=9491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.470 job7: (groupid=0, jobs=1): err= 0: pid=2293888: Wed Jul 24 10:46:00 2024 00:23:55.470 read: IOPS=892, BW=223MiB/s (234MB/s)(2246MiB/10061msec) 00:23:55.470 slat (usec): min=15, max=44045, avg=1086.27, stdev=4035.63 00:23:55.470 clat 
(msec): min=9, max=140, avg=70.53, stdev=14.77 00:23:55.470 lat (msec): min=9, max=140, avg=71.61, stdev=15.46 00:23:55.470 clat percentiles (msec): 00:23:55.470 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 63], 20.00th=[ 65], 00:23:55.470 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 80], 00:23:55.470 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 85], 00:23:55.470 | 99.00th=[ 94], 99.50th=[ 107], 99.90th=[ 123], 99.95th=[ 125], 00:23:55.470 | 99.99th=[ 140] 00:23:55.470 bw ( KiB/s): min=189952, max=427863, per=5.78%, avg=228394.35, stdev=52238.18, samples=20 00:23:55.470 iops : min= 742, max= 1671, avg=892.10, stdev=204.01, samples=20 00:23:55.470 lat (msec) : 10=0.04%, 20=0.58%, 50=7.35%, 100=91.39%, 250=0.63% 00:23:55.470 cpu : usr=0.28%, sys=3.71%, ctx=1878, majf=0, minf=4097 00:23:55.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.470 issued rwts: total=8983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.470 job8: (groupid=0, jobs=1): err= 0: pid=2293904: Wed Jul 24 10:46:00 2024 00:23:55.470 read: IOPS=884, BW=221MiB/s (232MB/s)(2224MiB/10060msec) 00:23:55.470 slat (usec): min=15, max=32661, avg=1120.92, stdev=3172.38 00:23:55.470 clat (msec): min=11, max=140, avg=71.18, stdev=12.43 00:23:55.470 lat (msec): min=11, max=140, avg=72.30, stdev=12.91 00:23:55.470 clat percentiles (msec): 00:23:55.470 | 1.00th=[ 49], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 63], 00:23:55.470 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 80], 00:23:55.470 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 85], 00:23:55.471 | 99.00th=[ 95], 99.50th=[ 112], 99.90th=[ 134], 99.95th=[ 136], 00:23:55.471 | 99.99th=[ 140] 00:23:55.471 bw ( KiB/s): min=191488, max=298922, per=5.73%, avg=226154.80, stdev=36642.32, samples=20 00:23:55.471 iops : min= 748, max= 1167, avg=883.30, stdev=143.13, samples=20 00:23:55.471 lat (msec) : 20=0.29%, 50=1.57%, 100=97.44%, 250=0.70% 00:23:55.471 cpu : usr=0.24%, sys=3.88%, ctx=1804, majf=0, minf=4097 00:23:55.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:55.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.471 issued rwts: total=8897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.471 job9: (groupid=0, jobs=1): err= 0: pid=2293907: Wed Jul 24 10:46:00 2024 00:23:55.471 read: IOPS=3048, BW=762MiB/s (799MB/s)(7632MiB/10014msec) 00:23:55.471 slat (usec): min=8, max=13708, avg=326.08, stdev=864.47 00:23:55.471 clat (usec): min=10910, max=61562, avg=20652.80, stdev=10966.85 00:23:55.471 lat (usec): min=11105, max=62794, avg=20978.89, stdev=11149.42 00:23:55.471 clat percentiles (usec): 00:23:55.471 | 1.00th=[13566], 5.00th=[13960], 10.00th=[14353], 20.00th=[15270], 00:23:55.471 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:23:55.471 | 70.00th=[16319], 80.00th=[18482], 90.00th=[46400], 95.00th=[48497], 00:23:55.471 | 99.00th=[51643], 99.50th=[53740], 99.90th=[56886], 99.95th=[57934], 00:23:55.471 | 99.99th=[58983] 00:23:55.471 bw ( KiB/s): min=323584, max=1049088, per=19.75%, avg=780100.20, stdev=317162.27, samples=20 
00:23:55.471 iops : min= 1264, max= 4098, avg=3047.20, stdev=1238.97, samples=20 00:23:55.471 lat (msec) : 20=80.33%, 50=17.38%, 100=2.29% 00:23:55.471 cpu : usr=0.48%, sys=7.53%, ctx=5450, majf=0, minf=4097 00:23:55.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:55.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.471 issued rwts: total=30527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.471 job10: (groupid=0, jobs=1): err= 0: pid=2293912: Wed Jul 24 10:46:00 2024 00:23:55.471 read: IOPS=1428, BW=357MiB/s (374MB/s)(3585MiB/10042msec) 00:23:55.471 slat (usec): min=10, max=18487, avg=694.74, stdev=1690.92 00:23:55.471 clat (usec): min=10778, max=83932, avg=44083.28, stdev=10470.15 00:23:55.471 lat (usec): min=11046, max=94726, avg=44778.02, stdev=10707.46 00:23:55.471 clat percentiles (usec): 00:23:55.471 | 1.00th=[30278], 5.00th=[31327], 10.00th=[31851], 20.00th=[33162], 00:23:55.471 | 30.00th=[33817], 40.00th=[36439], 50.00th=[47973], 60.00th=[48497], 00:23:55.471 | 70.00th=[49546], 80.00th=[51119], 90.00th=[54264], 95.00th=[64750], 00:23:55.471 | 99.00th=[68682], 99.50th=[71828], 99.90th=[78119], 99.95th=[81265], 00:23:55.471 | 99.99th=[83362] 00:23:55.471 bw ( KiB/s): min=246272, max=491008, per=9.25%, avg=365449.40, stdev=84388.63, samples=20 00:23:55.471 iops : min= 962, max= 1918, avg=1427.50, stdev=329.64, samples=20 00:23:55.471 lat (msec) : 20=0.22%, 50=73.01%, 100=26.77% 00:23:55.471 cpu : usr=0.46%, sys=5.54%, ctx=2790, majf=0, minf=4097 00:23:55.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:55.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:55.471 issued rwts: total=14340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:55.471 00:23:55.471 Run status group 0 (all jobs): 00:23:55.471 READ: bw=3857MiB/s (4044MB/s), 221MiB/s-762MiB/s (232MB/s-799MB/s), io=37.9GiB (40.7GB), run=10014-10061msec 00:23:55.471 00:23:55.471 Disk stats (read/write): 00:23:55.471 nvme0n1: ios=30041/0, merge=0/0, ticks=1235043/0, in_queue=1235043, util=97.90% 00:23:55.471 nvme10n1: ios=31705/0, merge=0/0, ticks=1235681/0, in_queue=1235681, util=98.02% 00:23:55.471 nvme1n1: ios=36033/0, merge=0/0, ticks=1232807/0, in_queue=1232807, util=98.22% 00:23:55.471 nvme2n1: ios=17681/0, merge=0/0, ticks=1234143/0, in_queue=1234143, util=98.29% 00:23:55.471 nvme3n1: ios=21296/0, merge=0/0, ticks=1236439/0, in_queue=1236439, util=98.34% 00:23:55.471 nvme4n1: ios=28562/0, merge=0/0, ticks=1234249/0, in_queue=1234249, util=98.57% 00:23:55.471 nvme5n1: ios=18871/0, merge=0/0, ticks=1234658/0, in_queue=1234658, util=98.70% 00:23:55.471 nvme6n1: ios=17848/0, merge=0/0, ticks=1237326/0, in_queue=1237326, util=98.77% 00:23:55.471 nvme7n1: ios=17674/0, merge=0/0, ticks=1234698/0, in_queue=1234698, util=99.02% 00:23:55.471 nvme8n1: ios=60944/0, merge=0/0, ticks=1233335/0, in_queue=1233335, util=99.15% 00:23:55.471 nvme9n1: ios=28555/0, merge=0/0, ticks=1234208/0, in_queue=1234208, util=99.26% 00:23:55.471 10:46:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 
64 -t randwrite -r 10 00:23:55.471 [global] 00:23:55.471 thread=1 00:23:55.471 invalidate=1 00:23:55.471 rw=randwrite 00:23:55.471 time_based=1 00:23:55.471 runtime=10 00:23:55.471 ioengine=libaio 00:23:55.471 direct=1 00:23:55.471 bs=262144 00:23:55.471 iodepth=64 00:23:55.471 norandommap=1 00:23:55.471 numjobs=1 00:23:55.471 00:23:55.471 [job0] 00:23:55.471 filename=/dev/nvme0n1 00:23:55.471 [job1] 00:23:55.471 filename=/dev/nvme10n1 00:23:55.471 [job2] 00:23:55.471 filename=/dev/nvme1n1 00:23:55.471 [job3] 00:23:55.471 filename=/dev/nvme2n1 00:23:55.471 [job4] 00:23:55.471 filename=/dev/nvme3n1 00:23:55.471 [job5] 00:23:55.471 filename=/dev/nvme4n1 00:23:55.471 [job6] 00:23:55.471 filename=/dev/nvme5n1 00:23:55.471 [job7] 00:23:55.471 filename=/dev/nvme6n1 00:23:55.471 [job8] 00:23:55.471 filename=/dev/nvme7n1 00:23:55.471 [job9] 00:23:55.471 filename=/dev/nvme8n1 00:23:55.471 [job10] 00:23:55.471 filename=/dev/nvme9n1 00:23:55.471 Could not set queue depth (nvme0n1) 00:23:55.471 Could not set queue depth (nvme10n1) 00:23:55.471 Could not set queue depth (nvme1n1) 00:23:55.471 Could not set queue depth (nvme2n1) 00:23:55.471 Could not set queue depth (nvme3n1) 00:23:55.471 Could not set queue depth (nvme4n1) 00:23:55.471 Could not set queue depth (nvme5n1) 00:23:55.471 Could not set queue depth (nvme6n1) 00:23:55.471 Could not set queue depth (nvme7n1) 00:23:55.471 Could not set queue depth (nvme8n1) 00:23:55.471 Could not set queue depth (nvme9n1) 00:23:55.471 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.471 fio-3.35 00:23:55.471 Starting 11 threads 00:24:05.449 00:24:05.449 job0: (groupid=0, jobs=1): err= 0: pid=2295549: Wed Jul 24 10:46:11 2024 00:24:05.449 write: IOPS=784, BW=196MiB/s (206MB/s)(1971MiB/10044msec); 0 zone resets 00:24:05.449 slat (usec): min=22, max=65293, avg=1252.47, stdev=2434.80 00:24:05.449 clat (msec): min=6, max=167, avg=80.26, stdev=21.97 00:24:05.449 lat (msec): min=6, max=167, avg=81.51, stdev=22.27 00:24:05.449 clat percentiles (msec): 00:24:05.449 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 53], 20.00th=[ 57], 00:24:05.449 | 30.00th=[ 69], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 93], 00:24:05.449 | 70.00th=[ 95], 80.00th=[ 
96], 90.00th=[ 99], 95.00th=[ 107], 00:24:05.449 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 140], 00:24:05.449 | 99.99th=[ 167] 00:24:05.449 bw ( KiB/s): min=139264, max=358912, per=5.75%, avg=200192.00, stdev=57200.42, samples=20 00:24:05.449 iops : min= 544, max= 1402, avg=782.00, stdev=223.44, samples=20 00:24:05.449 lat (msec) : 10=0.10%, 20=0.20%, 50=8.71%, 100=84.28%, 250=6.70% 00:24:05.449 cpu : usr=1.75%, sys=2.72%, ctx=2014, majf=0, minf=83 00:24:05.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:05.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.449 issued rwts: total=0,7883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.449 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.449 job1: (groupid=0, jobs=1): err= 0: pid=2295561: Wed Jul 24 10:46:11 2024 00:24:05.449 write: IOPS=2519, BW=630MiB/s (661MB/s)(6321MiB/10034msec); 0 zone resets 00:24:05.449 slat (usec): min=13, max=9074, avg=389.47, stdev=802.51 00:24:05.449 clat (usec): min=9197, max=68537, avg=25001.60, stdev=9624.91 00:24:05.449 lat (usec): min=9251, max=68583, avg=25391.07, stdev=9762.81 00:24:05.449 clat percentiles (usec): 00:24:05.449 | 1.00th=[16450], 5.00th=[17171], 10.00th=[17433], 20.00th=[17695], 00:24:05.449 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[19268], 00:24:05.449 | 70.00th=[35390], 80.00th=[36963], 90.00th=[38011], 95.00th=[38536], 00:24:05.449 | 99.00th=[52167], 99.50th=[56361], 99.90th=[60031], 99.95th=[62653], 00:24:05.449 | 99.99th=[68682] 00:24:05.449 bw ( KiB/s): min=406528, max=900608, per=18.55%, avg=645658.55, stdev=231122.94, samples=20 00:24:05.449 iops : min= 1588, max= 3518, avg=2522.10, stdev=902.83, samples=20 00:24:05.449 lat (msec) : 10=0.01%, 20=63.88%, 50=35.10%, 100=1.02% 00:24:05.449 cpu : usr=4.26%, sys=5.23%, ctx=5474, majf=0, minf=139 00:24:05.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:05.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.449 issued rwts: total=0,25282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.449 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.449 job2: (groupid=0, jobs=1): err= 0: pid=2295562: Wed Jul 24 10:46:11 2024 00:24:05.449 write: IOPS=1021, BW=255MiB/s (268MB/s)(2565MiB/10043msec); 0 zone resets 00:24:05.449 slat (usec): min=21, max=15340, avg=970.42, stdev=1748.96 00:24:05.449 clat (usec): min=18755, max=97376, avg=61655.96, stdev=9437.77 00:24:05.449 lat (usec): min=18788, max=97422, avg=62626.39, stdev=9568.23 00:24:05.449 clat percentiles (usec): 00:24:05.449 | 1.00th=[37487], 5.00th=[52167], 10.00th=[53216], 20.00th=[54789], 00:24:05.449 | 30.00th=[55837], 40.00th=[56886], 50.00th=[57410], 60.00th=[59507], 00:24:05.449 | 70.00th=[69731], 80.00th=[72877], 90.00th=[73925], 95.00th=[76022], 00:24:05.449 | 99.00th=[79168], 99.50th=[80217], 99.90th=[88605], 99.95th=[95945], 00:24:05.449 | 99.99th=[96994] 00:24:05.449 bw ( KiB/s): min=218624, max=333312, per=7.50%, avg=261067.85, stdev=34708.02, samples=20 00:24:05.449 iops : min= 854, max= 1302, avg=1019.75, stdev=135.60, samples=20 00:24:05.449 lat (msec) : 20=0.04%, 50=3.51%, 100=96.45% 00:24:05.449 cpu : usr=2.64%, sys=3.52%, ctx=2555, majf=0, minf=12 00:24:05.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.3%, >=64=99.4% 00:24:05.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.449 issued rwts: total=0,10260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.449 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.449 job3: (groupid=0, jobs=1): err= 0: pid=2295563: Wed Jul 24 10:46:11 2024 00:24:05.449 write: IOPS=1298, BW=325MiB/s (340MB/s)(3257MiB/10030msec); 0 zone resets 00:24:05.449 slat (usec): min=20, max=15374, avg=760.42, stdev=1483.16 00:24:05.449 clat (usec): min=4554, max=82542, avg=48505.71, stdev=15994.33 00:24:05.449 lat (usec): min=6041, max=84411, avg=49266.13, stdev=16239.26 00:24:05.449 clat percentiles (usec): 00:24:05.449 | 1.00th=[30278], 5.00th=[34866], 10.00th=[35914], 20.00th=[36439], 00:24:05.449 | 30.00th=[36963], 40.00th=[37487], 50.00th=[38011], 60.00th=[39584], 00:24:05.449 | 70.00th=[57934], 80.00th=[70779], 90.00th=[72877], 95.00th=[74974], 00:24:05.449 | 99.00th=[78119], 99.50th=[79168], 99.90th=[81265], 99.95th=[82314], 00:24:05.449 | 99.99th=[82314] 00:24:05.449 bw ( KiB/s): min=218624, max=437760, per=9.54%, avg=331877.45, stdev=99963.64, samples=20 00:24:05.449 iops : min= 854, max= 1710, avg=1296.35, stdev=390.52, samples=20 00:24:05.449 lat (msec) : 10=0.08%, 20=0.43%, 50=63.61%, 100=35.88% 00:24:05.449 cpu : usr=2.66%, sys=3.34%, ctx=3201, majf=0, minf=141 00:24:05.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:05.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.449 issued rwts: total=0,13026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 job4: (groupid=0, jobs=1): err= 0: pid=2295564: Wed Jul 24 10:46:11 2024 00:24:05.450 write: IOPS=709, BW=177MiB/s (186MB/s)(1784MiB/10055msec); 0 zone resets 00:24:05.450 slat (usec): min=24, max=17334, avg=1397.01, stdev=2490.50 00:24:05.450 clat (msec): min=16, max=139, avg=88.78, stdev=12.65 00:24:05.450 lat (msec): min=16, max=145, avg=90.18, stdev=12.73 00:24:05.450 clat percentiles (msec): 00:24:05.450 | 1.00th=[ 59], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 77], 00:24:05.450 | 30.00th=[ 79], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 94], 00:24:05.450 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 100], 95.00th=[ 108], 00:24:05.450 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 138], 00:24:05.450 | 99.99th=[ 140] 00:24:05.450 bw ( KiB/s): min=138752, max=220672, per=5.20%, avg=181017.60, stdev=22284.45, samples=20 00:24:05.450 iops : min= 542, max= 862, avg=707.10, stdev=87.05, samples=20 00:24:05.450 lat (msec) : 20=0.11%, 50=0.38%, 100=91.14%, 250=8.37% 00:24:05.450 cpu : usr=1.66%, sys=2.86%, ctx=1805, majf=0, minf=71 00:24:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.450 issued rwts: total=0,7134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 job5: (groupid=0, jobs=1): err= 0: pid=2295565: Wed Jul 24 10:46:11 2024 00:24:05.450 write: IOPS=991, BW=248MiB/s (260MB/s)(2489MiB/10043msec); 0 zone resets 00:24:05.450 slat (usec): min=22, max=59209, avg=984.94, 
stdev=1914.01 00:24:05.450 clat (msec): min=9, max=182, avg=63.55, stdev=11.99 00:24:05.450 lat (msec): min=9, max=182, avg=64.53, stdev=12.14 00:24:05.450 clat percentiles (msec): 00:24:05.450 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 56], 00:24:05.450 | 30.00th=[ 57], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 64], 00:24:05.450 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 78], 00:24:05.450 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 144], 00:24:05.450 | 99.99th=[ 182] 00:24:05.450 bw ( KiB/s): min=207360, max=290304, per=7.28%, avg=253286.40, stdev=33174.36, samples=20 00:24:05.450 iops : min= 810, max= 1134, avg=989.40, stdev=129.59, samples=20 00:24:05.450 lat (msec) : 10=0.04%, 20=0.04%, 50=0.41%, 100=97.68%, 250=1.83% 00:24:05.450 cpu : usr=2.53%, sys=3.58%, ctx=2544, majf=0, minf=138 00:24:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.450 issued rwts: total=0,9957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 job6: (groupid=0, jobs=1): err= 0: pid=2295566: Wed Jul 24 10:46:11 2024 00:24:05.450 write: IOPS=710, BW=178MiB/s (186MB/s)(1786MiB/10055msec); 0 zone resets 00:24:05.450 slat (usec): min=21, max=18771, avg=1387.07, stdev=2482.92 00:24:05.450 clat (msec): min=21, max=136, avg=88.66, stdev=12.39 00:24:05.450 lat (msec): min=21, max=147, avg=90.05, stdev=12.51 00:24:05.450 clat percentiles (msec): 00:24:05.450 | 1.00th=[ 59], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 77], 00:24:05.450 | 30.00th=[ 79], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 94], 00:24:05.450 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 100], 95.00th=[ 107], 00:24:05.450 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 131], 99.95th=[ 133], 00:24:05.450 | 99.99th=[ 138] 00:24:05.450 bw ( KiB/s): min=139776, max=223744, per=5.21%, avg=181291.65, stdev=22552.04, samples=20 00:24:05.450 iops : min= 546, max= 874, avg=708.15, stdev=88.09, samples=20 00:24:05.450 lat (msec) : 50=0.59%, 100=91.18%, 250=8.23% 00:24:05.450 cpu : usr=1.78%, sys=2.76%, ctx=1833, majf=0, minf=201 00:24:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.450 issued rwts: total=0,7144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 job7: (groupid=0, jobs=1): err= 0: pid=2295567: Wed Jul 24 10:46:11 2024 00:24:05.450 write: IOPS=907, BW=227MiB/s (238MB/s)(2275MiB/10030msec); 0 zone resets 00:24:05.450 slat (usec): min=20, max=21664, avg=1082.36, stdev=2168.70 00:24:05.450 clat (msec): min=16, max=143, avg=69.44, stdev=29.07 00:24:05.450 lat (msec): min=16, max=143, avg=70.52, stdev=29.53 00:24:05.450 clat percentiles (msec): 00:24:05.450 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:24:05.450 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 89], 60.00th=[ 92], 00:24:05.450 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 99], 95.00th=[ 103], 00:24:05.450 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 133], 99.95th=[ 136], 00:24:05.450 | 99.99th=[ 144] 00:24:05.450 bw ( KiB/s): min=141824, max=433152, per=6.65%, avg=231347.20, stdev=110927.49, samples=20 
00:24:05.450 iops : min= 554, max= 1692, avg=903.70, stdev=433.31, samples=20 00:24:05.450 lat (msec) : 20=0.11%, 50=42.04%, 100=51.41%, 250=6.44% 00:24:05.450 cpu : usr=2.08%, sys=2.98%, ctx=2315, majf=0, minf=336 00:24:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.450 issued rwts: total=0,9100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 job8: (groupid=0, jobs=1): err= 0: pid=2295568: Wed Jul 24 10:46:11 2024 00:24:05.450 write: IOPS=710, BW=178MiB/s (186MB/s)(1787MiB/10058msec); 0 zone resets 00:24:05.450 slat (usec): min=24, max=19420, avg=1376.81, stdev=2496.96 00:24:05.450 clat (msec): min=2, max=140, avg=88.66, stdev=13.23 00:24:05.450 lat (msec): min=2, max=141, avg=90.04, stdev=13.42 00:24:05.450 clat percentiles (msec): 00:24:05.450 | 1.00th=[ 52], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 77], 00:24:05.450 | 30.00th=[ 79], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 94], 00:24:05.450 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 100], 95.00th=[ 107], 00:24:05.450 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 140], 00:24:05.450 | 99.99th=[ 140] 00:24:05.450 bw ( KiB/s): min=141824, max=225280, per=5.21%, avg=181350.40, stdev=22359.93, samples=20 00:24:05.450 iops : min= 554, max= 880, avg=708.40, stdev=87.34, samples=20 00:24:05.450 lat (msec) : 4=0.04%, 10=0.11%, 20=0.11%, 50=0.64%, 100=90.63% 00:24:05.450 lat (msec) : 250=8.47% 00:24:05.450 cpu : usr=1.93%, sys=2.67%, ctx=1852, majf=0, minf=75 00:24:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.450 issued rwts: total=0,7147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 job9: (groupid=0, jobs=1): err= 0: pid=2295569: Wed Jul 24 10:46:11 2024 00:24:05.450 write: IOPS=983, BW=246MiB/s (258MB/s)(2474MiB/10055msec); 0 zone resets 00:24:05.450 slat (usec): min=18, max=59137, avg=985.28, stdev=1977.20 00:24:05.450 clat (msec): min=6, max=146, avg=64.03, stdev=18.20 00:24:05.450 lat (msec): min=6, max=149, avg=65.02, stdev=18.45 00:24:05.450 clat percentiles (msec): 00:24:05.450 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 53], 20.00th=[ 56], 00:24:05.450 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 69], 60.00th=[ 71], 00:24:05.450 | 70.00th=[ 73], 80.00th=[ 75], 90.00th=[ 78], 95.00th=[ 83], 00:24:05.450 | 99.00th=[ 117], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 133], 00:24:05.450 | 99.99th=[ 146] 00:24:05.450 bw ( KiB/s): min=170496, max=428889, per=7.23%, avg=251716.45, stdev=54873.80, samples=20 00:24:05.450 iops : min= 666, max= 1675, avg=983.25, stdev=214.29, samples=20 00:24:05.450 lat (msec) : 10=0.08%, 20=6.31%, 50=1.68%, 100=87.86%, 250=4.07% 00:24:05.450 cpu : usr=2.29%, sys=3.58%, ctx=2502, majf=0, minf=24 00:24:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.450 issued rwts: total=0,9894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 job10: (groupid=0, jobs=1): err= 0: pid=2295570: Wed Jul 24 10:46:11 2024 00:24:05.450 write: IOPS=2979, BW=745MiB/s (781MB/s)(7474MiB/10034msec); 0 zone resets 00:24:05.450 slat (usec): min=15, max=9456, avg=330.96, stdev=685.61 00:24:05.450 clat (usec): min=562, max=78852, avg=21141.81, stdev=7704.09 00:24:05.450 lat (usec): min=605, max=78919, avg=21472.77, stdev=7825.17 00:24:05.450 clat percentiles (usec): 00:24:05.450 | 1.00th=[16188], 5.00th=[17171], 10.00th=[17433], 20.00th=[17957], 00:24:05.450 | 30.00th=[18220], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:24:05.450 | 70.00th=[19006], 80.00th=[19530], 90.00th=[35914], 95.00th=[37487], 00:24:05.450 | 99.00th=[44303], 99.50th=[68682], 99.90th=[72877], 99.95th=[73925], 00:24:05.450 | 99.99th=[77071] 00:24:05.450 bw ( KiB/s): min=433664, max=883200, per=21.95%, avg=763750.40, stdev=181638.10, samples=20 00:24:05.450 iops : min= 1694, max= 3450, avg=2983.40, stdev=709.52, samples=20 00:24:05.450 lat (usec) : 750=0.01% 00:24:05.450 lat (msec) : 2=0.05%, 4=0.15%, 10=0.30%, 20=83.78%, 50=14.91% 00:24:05.450 lat (msec) : 100=0.79% 00:24:05.450 cpu : usr=4.55%, sys=6.09%, ctx=6835, majf=0, minf=335 00:24:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.450 issued rwts: total=0,29897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.450 00:24:05.450 Run status group 0 (all jobs): 00:24:05.450 WRITE: bw=3398MiB/s (3563MB/s), 177MiB/s-745MiB/s (186MB/s-781MB/s), io=33.4GiB (35.8GB), run=10030-10058msec 00:24:05.450 00:24:05.450 Disk stats (read/write): 00:24:05.451 nvme0n1: ios=49/15612, merge=0/0, ticks=19/1226969, in_queue=1226988, util=97.82% 00:24:05.451 nvme10n1: ios=0/50381, merge=0/0, ticks=0/1238978, in_queue=1238978, util=97.87% 00:24:05.451 nvme1n1: ios=0/20367, merge=0/0, ticks=0/1230199, in_queue=1230199, util=98.08% 00:24:05.451 nvme2n1: ios=0/25883, merge=0/0, ticks=0/1232683, in_queue=1232683, util=98.18% 00:24:05.451 nvme3n1: ios=0/14125, merge=0/0, ticks=0/1223596, in_queue=1223596, util=98.23% 00:24:05.451 nvme4n1: ios=0/19756, merge=0/0, ticks=0/1229305, in_queue=1229305, util=98.44% 00:24:05.451 nvme5n1: ios=0/14142, merge=0/0, ticks=0/1226157, in_queue=1226157, util=98.55% 00:24:05.451 nvme6n1: ios=0/18031, merge=0/0, ticks=0/1227847, in_queue=1227847, util=98.63% 00:24:05.451 nvme7n1: ios=0/14149, merge=0/0, ticks=0/1226141, in_queue=1226141, util=98.90% 00:24:05.451 nvme8n1: ios=0/19644, merge=0/0, ticks=0/1228759, in_queue=1228759, util=99.01% 00:24:05.451 nvme9n1: ios=0/59627, merge=0/0, ticks=0/1230446, in_queue=1230446, util=99.08% 00:24:05.451 10:46:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:05.451 10:46:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:05.451 10:46:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:05.451 10:46:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:05.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:05.451 10:46:12 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:05.451 10:46:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:06.386 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.386 10:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:07.320 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 
controller(s) 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.578 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.579 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.579 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.579 10:46:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:08.514 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.514 10:46:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:09.450 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.450 10:46:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:10.387 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.387 10:46:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode7 00:24:11.324 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.324 10:46:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:12.262 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:12.262 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:12.262 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:12.262 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:12.262 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.523 10:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:13.460 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.460 10:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:14.397 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:14.397 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.398 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.398 10:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:15.343 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:15.343 rmmod nvme_rdma 00:24:15.343 rmmod nvme_fabrics 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2287793 ']' 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2287793 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 2287793 ']' 00:24:15.343 
10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 2287793 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2287793 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2287793' 00:24:15.343 killing process with pid 2287793 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 2287793 00:24:15.343 10:46:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 2287793 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:15.915 00:24:15.915 real 1m12.479s 00:24:15.915 user 4m42.427s 00:24:15.915 sys 0m16.168s 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.915 ************************************ 00:24:15.915 END TEST nvmf_multiconnection 00:24:15.915 ************************************ 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:15.915 ************************************ 00:24:15.915 START TEST nvmf_initiator_timeout 00:24:15.915 ************************************ 00:24:15.915 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:16.175 * Looking for test storage... 
00:24:16.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.175 10:46:23 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.175 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.176 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.176 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.176 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.176 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.176 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.176 10:46:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.498 10:46:28 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:21.498 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:21.498 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:21.498 Found net devices under 0000:da:00.0: mlx_0_0 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:21.498 Found net devices under 0000:da:00.1: mlx_0_1 
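The two ConnectX ports found above (0x15b3:0x1015 under 0000:da:00.0/1) map to the mlx_0_0 and mlx_0_1 net devices, and the trace that follows walks those interfaces and reads each one's IPv4 address with the same ip/awk/cut pipeline. Pulled out into a minimal sketch (interface names taken from this run):

  # read the IPv4 address assigned to each RDMA-capable port
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # on this host the loop prints 192.168.100.8 and 192.168.100.9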
00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:21.498 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:21.499 
10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:21.499 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:21.499 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:21.499 altname enp218s0f0np0 00:24:21.499 altname ens818f0np0 00:24:21.499 inet 192.168.100.8/24 scope global mlx_0_0 00:24:21.499 valid_lft forever preferred_lft forever 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:21.499 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:24:21.499 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:21.499 altname enp218s0f1np1 00:24:21.499 altname ens818f1np1 00:24:21.499 inet 192.168.100.9/24 scope global mlx_0_1 00:24:21.499 valid_lft forever preferred_lft forever 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:21.499 192.168.100.9' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:21.499 192.168.100.9' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:21.499 192.168.100.9' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.499 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2301925 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2301925 00:24:21.500 10:46:28 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 2301925 ']' 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.500 [2024-07-24 10:46:28.575148] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:24:21.500 [2024-07-24 10:46:28.575192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.500 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.500 [2024-07-24 10:46:28.631282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.500 [2024-07-24 10:46:28.673076] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.500 [2024-07-24 10:46:28.673117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.500 [2024-07-24 10:46:28.673124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.500 [2024-07-24 10:46:28.673129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.500 [2024-07-24 10:46:28.673134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
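The target process started above is plain nvmf_tgt plus a poll on its RPC socket; stripped of the xtrace plumbing it is roughly (paths and flags as shown in this log, waitforlisten behaviour summarised from the common helpers):

  # launch the SPDK NVMe-oF target on 4 cores with all tracepoint groups enabled
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten then polls /var/tmp/spdk.sock until the app answers RPCs
  # (e.g. scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods succeeds)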
00:24:21.500 [2024-07-24 10:46:28.673179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.500 [2024-07-24 10:46:28.673274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.500 [2024-07-24 10:46:28.673363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.500 [2024-07-24 10:46:28.673364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.500 Malloc0 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.500 Delay0 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.500 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.500 [2024-07-24 10:46:28.871188] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc37e0/0xefc280) succeed. 00:24:21.500 [2024-07-24 10:46:28.880548] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdc3c30/0xddc100) succeed. 
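With both IB devices created, the next trace lines finish the target side and connect the initiator: a 64 MiB malloc bdev with 512-byte blocks wrapped in a delay bdev, one subsystem carrying that namespace, and an RDMA listener on 192.168.100.8:4420. Collected into a minimal sketch (scripts/rpc.py standing in for the rpc_cmd wrapper used by the test; all values as traced):

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # baseline delay latencies
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid=803833e2-2ada-e911-906e-0017a4403562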
00:24:21.758 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:21.758 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 10:46:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 [2024-07-24 10:46:29.022697] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:22.694 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:22.694 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:22.694 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.694 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:22.694 10:46:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:24.595 10:46:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:24.595 10:46:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:24.595 10:46:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:24.595 10:46:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:24.595 10:46:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.595 10:46:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:24.595 10:46:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2302501 00:24:24.595 10:46:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:24.595 10:46:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:24.595 [global] 00:24:24.595 thread=1 00:24:24.595 invalidate=1 00:24:24.595 rw=write 00:24:24.595 time_based=1 00:24:24.595 runtime=60 00:24:24.595 ioengine=libaio 00:24:24.595 direct=1 00:24:24.595 bs=4096 00:24:24.595 iodepth=1 00:24:24.595 norandommap=0 00:24:24.595 numjobs=1 00:24:24.595 00:24:24.595 verify_dump=1 00:24:24.595 verify_backlog=512 00:24:24.596 verify_state_save=0 00:24:24.596 do_verify=1 00:24:24.596 verify=crc32c-intel 00:24:24.596 [job0] 00:24:24.596 filename=/dev/nvme0n1 00:24:24.854 Could not set queue depth (nvme0n1) 00:24:24.854 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:24.854 fio-3.35 00:24:24.854 Starting 1 thread 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.138 true 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.138 true 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.138 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:28.139 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.139 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.139 true 00:24:28.139 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.139 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:28.139 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.139 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.139 true 00:24:28.139 10:46:35 
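The heart of the initiator-timeout check sits in the bdev_delay_update_latency calls around this point: while the fio job shown above runs against /dev/nvme0n1, the Delay0 latencies are raised from their 30 baseline to 31000000 (310000000 for p99 writes) so that outstanding I/O outlives the initiator's timeout, then dropped back so fio can complete and verify. As a rough shorthand (values from the trace; the delay bdev takes its latencies in microseconds, so 31000000 is about 31 s):

  # stretch the delay bdev far past the initiator's I/O timeout
  for lat in avg_read avg_write p99_read; do
      rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
  done
  rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # restore the short baseline latencies so the fio verify pass can finish
  for lat in avg_read avg_write p99_read p99_write; do
      rpc.py bdev_delay_update_latency Delay0 "$lat" 30
  done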
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.139 10:46:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.671 true 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.671 true 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.671 true 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.671 true 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:30.671 10:46:38 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2302501 00:25:26.895 00:25:26.895 job0: (groupid=0, jobs=1): err= 0: pid=2302620: Wed Jul 24 10:47:32 2024 00:25:26.895 read: IOPS=1288, BW=5154KiB/s (5278kB/s)(302MiB/60000msec) 00:25:26.895 slat (usec): min=4, max=12594, avg= 7.19, stdev=62.09 00:25:26.895 clat (usec): min=24, max=42699k, avg=654.33, stdev=153565.63 00:25:26.895 lat (usec): min=90, max=42699k, avg=661.52, stdev=153565.64 00:25:26.895 clat percentiles (usec): 00:25:26.895 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 94], 20.00th=[ 96], 00:25:26.895 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 103], 00:25:26.895 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:25:26.895 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 135], 99.95th=[ 176], 00:25:26.895 | 99.99th=[ 277] 00:25:26.895 write: IOPS=1296, BW=5185KiB/s (5310kB/s)(304MiB/60000msec); 0 zone resets 00:25:26.895 slat (usec): 
min=2, max=191, avg= 9.28, stdev= 1.61 00:25:26.895 clat (usec): min=75, max=476, avg=99.73, stdev= 7.62 00:25:26.895 lat (usec): min=88, max=509, avg=109.01, stdev= 7.91 00:25:26.895 clat percentiles (usec): 00:25:26.895 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 94], 00:25:26.895 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 99], 60.00th=[ 101], 00:25:26.895 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 112], 00:25:26.895 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 135], 99.95th=[ 174], 00:25:26.895 | 99.99th=[ 306] 00:25:26.895 bw ( KiB/s): min= 4096, max=20480, per=100.00%, avg=17320.23, stdev=3086.60, samples=35 00:25:26.895 iops : min= 1024, max= 5120, avg=4330.06, stdev=771.65, samples=35 00:25:26.895 lat (usec) : 50=0.01%, 100=48.40%, 250=51.58%, 500=0.02% 00:25:26.895 lat (msec) : >=2000=0.01% 00:25:26.895 cpu : usr=1.44%, sys=2.56%, ctx=155100, majf=0, minf=107 00:25:26.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.895 issued rwts: total=77312,77776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:26.895 00:25:26.895 Run status group 0 (all jobs): 00:25:26.895 READ: bw=5154KiB/s (5278kB/s), 5154KiB/s-5154KiB/s (5278kB/s-5278kB/s), io=302MiB (317MB), run=60000-60000msec 00:25:26.895 WRITE: bw=5185KiB/s (5310kB/s), 5185KiB/s-5185KiB/s (5310kB/s-5310kB/s), io=304MiB (319MB), run=60000-60000msec 00:25:26.895 00:25:26.895 Disk stats (read/write): 00:25:26.895 nvme0n1: ios=77217/77312, merge=0/0, ticks=7407/7305, in_queue=14712, util=99.61% 00:25:26.895 10:47:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:26.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:26.895 nvmf hotplug test: fio successful as expected 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
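The summary figures in the fio output above hang together: 77312 reads and 77776 writes of 4 KiB each over the 60 s run work out to the reported 5154 KiB/s and 5185 KiB/s. A quick check in shell arithmetic:

  echo $((77312 * 4 / 60))    # 5154  -> READ bandwidth in KiB/s
  echo $((77776 * 4 / 60))    # 5185  -> WRITE bandwidth in KiB/s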
common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:26.895 rmmod nvme_rdma 00:25:26.895 rmmod nvme_fabrics 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2301925 ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2301925 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 2301925 ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 2301925 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2301925 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2301925' 00:25:26.895 killing process with pid 2301925 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 2301925 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 2301925 00:25:26.895 10:47:33 
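The teardown traced through here is symmetric with the setup: disconnect the initiator, drop the subsystem, unload the initiator-side RDMA modules, and stop the target. Boiled down (PID and NQN as in this run; the killprocess helper does a little more signal handling than a bare kill):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma        # the trace shows nvme_rdma and nvme_fabrics being removed
  modprobe -v -r nvme-fabrics
  kill 2301925 && wait 2301925    # nvmf_tgt started earlier as pid 2301925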
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:26.895 00:25:26.895 real 1m10.481s 00:25:26.895 user 4m27.283s 00:25:26.895 sys 0m5.950s 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:26.895 ************************************ 00:25:26.895 END TEST nvmf_initiator_timeout 00:25:26.895 ************************************ 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' rdma = tcp ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # [[ rdma == \r\d\m\a ]] 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@61 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:26.895 10:47:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:26.896 ************************************ 00:25:26.896 START TEST nvmf_srq_overwhelm 00:25:26.896 ************************************ 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:26.896 * Looking for test storage... 
00:25:26.896 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.896 10:47:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # 
x722=() 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:25:32.168 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:32.168 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:32.169 10:47:38 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:25:32.169 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:25:32.169 Found net devices under 0000:da:00.0: mlx_0_0 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:25:32.169 Found net devices under 0000:da:00.1: mlx_0_1 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.169 
10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.169 
10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:32.169 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:32.169 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:25:32.169 altname enp218s0f0np0 00:25:32.169 altname ens818f0np0 00:25:32.169 inet 192.168.100.8/24 scope global mlx_0_0 00:25:32.169 valid_lft forever preferred_lft forever 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:32.169 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:32.169 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:25:32.169 altname enp218s0f1np1 00:25:32.169 altname ens818f1np1 00:25:32.169 inet 192.168.100.9/24 scope global mlx_0_1 00:25:32.169 valid_lft forever preferred_lft forever 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso 
']' 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.169 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address 
mlx_0_1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:32.170 192.168.100.9' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:32.170 192.168.100.9' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:32.170 192.168.100.9' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=2315392 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 2315392 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 2315392 ']' 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:32.170 10:47:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 [2024-07-24 10:47:39.021194] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:25:32.170 [2024-07-24 10:47:39.021246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.170 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.170 [2024-07-24 10:47:39.077410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.170 [2024-07-24 10:47:39.120543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.170 [2024-07-24 10:47:39.120575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.170 [2024-07-24 10:47:39.120586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.170 [2024-07-24 10:47:39.120592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.170 [2024-07-24 10:47:39.120597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.170 [2024-07-24 10:47:39.120642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.170 [2024-07-24 10:47:39.120738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.170 [2024-07-24 10:47:39.120755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.170 [2024-07-24 10:47:39.120756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 [2024-07-24 10:47:39.294995] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd566a0/0xd5ab70) succeed. 00:25:32.170 [2024-07-24 10:47:39.304187] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd57c90/0xd9c200) succeed. 
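At this point the nvmf_tgt application is up and listening on /var/tmp/spdk.sock, the RDMA transport has been created with 1024 shared buffers, and both mlx5 ports have been registered as IB devices. A minimal sketch of the equivalent manual setup, assuming an SPDK source tree with the usual build/bin and scripts layout; the trace above drives the same steps through the nvmfappstart and rpc_cmd test helpers, and the readiness poll below merely stands in for its waitforlisten helper:

# Sketch only: start the target and create the RDMA transport by hand.
# Flags are the ones visible in the trace above; paths assume an SPDK checkout.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done   # wait for /var/tmp/spdk.sock
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024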
00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 Malloc0 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:32.170 [2024-07-24 10:47:39.398799] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.170 10:47:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# grep -q -w nvme0n1 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:33.106 Malloc1 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.106 10:47:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:34.040 Malloc2 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.040 10:47:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:34.975 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:25:34.975 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:34.975 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:34.975 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:25:35.233 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:25:35.234 10:47:42 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:35.234 Malloc3 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.234 10:47:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:36.229 
10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 Malloc4 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.229 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.230 10:47:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
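Each backend in this test goes through the same cycle: create a subsystem, back it with a 64 MiB malloc bdev, attach the bdev as a namespace, expose an RDMA listener on 192.168.100.8:4420, connect from the initiator with nvme-cli, then poll until the new block device appears. The cnode5 iteration that follows is the last of six. A condensed sketch of one iteration, with the options copied from the trace; the test's rpc_cmd and waitforblk helpers wrap rpc.py and lsblk respectively, and the host NQN/UUID shown is specific to this machine:

# Sketch of one pass of the loop traced above (i=5 matches the cnode5 setup below).
i=5
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i                        # 64 MiB, 512 B blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
    --hostid=803833e2-2ada-e911-906e-0017a4403562
until lsblk -l -o NAME | grep -q -w nvme${i}n1; do sleep 1; done              # waitforblk equivalent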
00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.164 Malloc5 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.164 10:47:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:25:38.099 10:47:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:25:38.357 
[global] 00:25:38.357 thread=1 00:25:38.357 invalidate=1 00:25:38.357 rw=read 00:25:38.357 time_based=1 00:25:38.357 runtime=10 00:25:38.357 ioengine=libaio 00:25:38.357 direct=1 00:25:38.357 bs=1048576 00:25:38.357 iodepth=128 00:25:38.357 norandommap=1 00:25:38.357 numjobs=13 00:25:38.357 00:25:38.357 [job0] 00:25:38.357 filename=/dev/nvme0n1 00:25:38.357 [job1] 00:25:38.357 filename=/dev/nvme1n1 00:25:38.357 [job2] 00:25:38.357 filename=/dev/nvme2n1 00:25:38.357 [job3] 00:25:38.357 filename=/dev/nvme3n1 00:25:38.357 [job4] 00:25:38.357 filename=/dev/nvme4n1 00:25:38.357 [job5] 00:25:38.357 filename=/dev/nvme5n1 00:25:38.357 Could not set queue depth (nvme0n1) 00:25:38.357 Could not set queue depth (nvme1n1) 00:25:38.357 Could not set queue depth (nvme2n1) 00:25:38.357 Could not set queue depth (nvme3n1) 00:25:38.357 Could not set queue depth (nvme4n1) 00:25:38.357 Could not set queue depth (nvme5n1) 00:25:38.615 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:38.615 ... 00:25:38.615 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:38.615 ... 00:25:38.615 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:38.615 ... 00:25:38.615 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:38.615 ... 00:25:38.615 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:38.615 ... 00:25:38.615 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:38.615 ... 
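The fio-wrapper invocation above expands into the job file just dumped: 1 MiB sequential reads at queue depth 128 for 10 seconds, with 13 jobs against each of the six connected namespaces, which is where the 78 threads reported below come from and is the load intended to overwhelm the target's shared receive queue. Reconstructed as it would look on disk (contents copied from the dump; the file name is illustrative):

# srq_overwhelm.fio - same parameters as the dump above; 6 devices x 13 jobs = 78 threads
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1

Running it directly with fio srq_overwhelm.fio would produce the same per-job bandwidth and latency report that follows.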
00:25:38.615 fio-3.35 00:25:38.615 Starting 78 threads 00:25:53.503 00:25:53.503 job0: (groupid=0, jobs=1): err= 0: pid=2316793: Wed Jul 24 10:47:59 2024 00:25:53.503 read: IOPS=45, BW=45.3MiB/s (47.5MB/s)(580MiB/12798msec) 00:25:53.503 slat (usec): min=39, max=1678.5k, avg=18473.91, stdev=72693.47 00:25:53.503 clat (msec): min=719, max=4535, avg=2670.42, stdev=1075.90 00:25:53.503 lat (msec): min=721, max=4536, avg=2688.90, stdev=1074.80 00:25:53.503 clat percentiles (msec): 00:25:53.503 | 1.00th=[ 726], 5.00th=[ 885], 10.00th=[ 1586], 20.00th=[ 1770], 00:25:53.504 | 30.00th=[ 1888], 40.00th=[ 1938], 50.00th=[ 2400], 60.00th=[ 3104], 00:25:53.504 | 70.00th=[ 3540], 80.00th=[ 3876], 90.00th=[ 4144], 95.00th=[ 4329], 00:25:53.504 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4530], 99.95th=[ 4530], 00:25:53.504 | 99.99th=[ 4530] 00:25:53.504 bw ( KiB/s): min= 2052, max=184320, per=1.82%, avg=54563.24, stdev=48429.31, samples=17 00:25:53.504 iops : min= 2, max= 180, avg=53.12, stdev=47.39, samples=17 00:25:53.504 lat (msec) : 750=2.07%, 1000=3.45%, 2000=40.00%, >=2000=54.48% 00:25:53.504 cpu : usr=0.03%, sys=1.01%, ctx=1755, majf=0, minf=32769 00:25:53.504 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:25:53.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.504 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.504 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.504 job0: (groupid=0, jobs=1): err= 0: pid=2316794: Wed Jul 24 10:47:59 2024 00:25:53.504 read: IOPS=35, BW=35.8MiB/s (37.6MB/s)(458MiB/12786msec) 00:25:53.504 slat (usec): min=108, max=2093.1k, avg=23353.85, stdev=132918.75 00:25:53.504 clat (msec): min=1568, max=5949, avg=2776.44, stdev=1369.86 00:25:53.504 lat (msec): min=1580, max=5960, avg=2799.79, stdev=1371.24 00:25:53.504 clat percentiles (msec): 00:25:53.504 | 1.00th=[ 1586], 5.00th=[ 1670], 10.00th=[ 1720], 20.00th=[ 1770], 00:25:53.504 | 30.00th=[ 1838], 40.00th=[ 1905], 50.00th=[ 2106], 60.00th=[ 2198], 00:25:53.504 | 70.00th=[ 2265], 80.00th=[ 4463], 90.00th=[ 5134], 95.00th=[ 5470], 00:25:53.504 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5940], 99.95th=[ 5940], 00:25:53.504 | 99.99th=[ 5940] 00:25:53.504 bw ( KiB/s): min= 2052, max=91976, per=1.88%, avg=56470.83, stdev=25982.25, samples=12 00:25:53.504 iops : min= 2, max= 89, avg=55.00, stdev=25.37, samples=12 00:25:53.504 lat (msec) : 2000=45.41%, >=2000=54.59% 00:25:53.504 cpu : usr=0.00%, sys=0.80%, ctx=1489, majf=0, minf=32769 00:25:53.504 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.2% 00:25:53.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.504 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:53.504 issued rwts: total=458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.504 job0: (groupid=0, jobs=1): err= 0: pid=2316795: Wed Jul 24 10:47:59 2024 00:25:53.504 read: IOPS=97, BW=97.8MiB/s (103MB/s)(1251MiB/12786msec) 00:25:53.504 slat (usec): min=42, max=2122.0k, avg=8547.01, stdev=103003.88 00:25:53.504 clat (msec): min=132, max=9164, avg=1260.70, stdev=2498.24 00:25:53.504 lat (msec): min=133, max=9165, avg=1269.25, stdev=2507.32 00:25:53.504 clat percentiles (msec): 00:25:53.504 | 1.00th=[ 146], 5.00th=[ 197], 10.00th=[ 253], 20.00th=[ 257], 00:25:53.504 | 30.00th=[ 
259], 40.00th=[ 268], 50.00th=[ 321], 60.00th=[ 422], 00:25:53.504 | 70.00th=[ 709], 80.00th=[ 735], 90.00th=[ 5000], 95.00th=[ 8792], 00:25:53.504 | 99.00th=[ 9060], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:25:53.504 | 99.99th=[ 9194] 00:25:53.504 bw ( KiB/s): min= 2052, max=578427, per=6.96%, avg=209129.64, stdev=194419.26, samples=11 00:25:53.504 iops : min= 2, max= 564, avg=204.00, stdev=189.81, samples=11 00:25:53.504 lat (msec) : 250=9.83%, 500=53.16%, 750=22.62%, 1000=3.28%, >=2000=11.11% 00:25:53.504 cpu : usr=0.07%, sys=1.41%, ctx=1154, majf=0, minf=32769 00:25:53.504 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=95.0% 00:25:53.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.504 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.504 issued rwts: total=1251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.504 job0: (groupid=0, jobs=1): err= 0: pid=2316796: Wed Jul 24 10:47:59 2024 00:25:53.504 read: IOPS=32, BW=32.8MiB/s (34.4MB/s)(418MiB/12752msec) 00:25:53.504 slat (usec): min=453, max=2173.0k, avg=25522.75, stdev=142688.41 00:25:53.504 clat (msec): min=1687, max=6023, avg=2906.21, stdev=1577.53 00:25:53.504 lat (msec): min=1695, max=6044, avg=2931.73, stdev=1582.94 00:25:53.504 clat percentiles (msec): 00:25:53.504 | 1.00th=[ 1687], 5.00th=[ 1720], 10.00th=[ 1737], 20.00th=[ 1770], 00:25:53.504 | 30.00th=[ 1838], 40.00th=[ 1871], 50.00th=[ 1938], 60.00th=[ 2005], 00:25:53.504 | 70.00th=[ 4279], 80.00th=[ 4799], 90.00th=[ 5604], 95.00th=[ 5940], 00:25:53.504 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:25:53.504 | 99.99th=[ 6007] 00:25:53.504 bw ( KiB/s): min= 1424, max=106496, per=1.98%, avg=59540.50, stdev=34475.64, samples=10 00:25:53.504 iops : min= 1, max= 104, avg=58.00, stdev=33.75, samples=10 00:25:53.504 lat (msec) : 2000=59.81%, >=2000=40.19% 00:25:53.504 cpu : usr=0.02%, sys=0.85%, ctx=1525, majf=0, minf=32769 00:25:53.504 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9% 00:25:53.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.504 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:53.504 issued rwts: total=418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.504 job0: (groupid=0, jobs=1): err= 0: pid=2316797: Wed Jul 24 10:47:59 2024 00:25:53.504 read: IOPS=50, BW=50.3MiB/s (52.7MB/s)(646MiB/12853msec) 00:25:53.504 slat (usec): min=57, max=2088.8k, avg=16666.26, stdev=111782.71 00:25:53.504 clat (msec): min=645, max=6360, avg=2453.03, stdev=1892.85 00:25:53.504 lat (msec): min=651, max=8282, avg=2469.70, stdev=1905.69 00:25:53.504 clat percentiles (msec): 00:25:53.504 | 1.00th=[ 651], 5.00th=[ 659], 10.00th=[ 667], 20.00th=[ 718], 00:25:53.504 | 30.00th=[ 776], 40.00th=[ 902], 50.00th=[ 1620], 60.00th=[ 2299], 00:25:53.504 | 70.00th=[ 4329], 80.00th=[ 4597], 90.00th=[ 5269], 95.00th=[ 5537], 00:25:53.504 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 6342], 99.95th=[ 6342], 00:25:53.504 | 99.99th=[ 6342] 00:25:53.504 bw ( KiB/s): min= 1957, max=190464, per=2.21%, avg=66421.19, stdev=61193.13, samples=16 00:25:53.504 iops : min= 1, max= 186, avg=64.75, stdev=59.85, samples=16 00:25:53.504 lat (msec) : 750=26.47%, 1000=19.35%, 2000=11.30%, >=2000=42.88% 00:25:53.504 cpu : usr=0.05%, sys=1.27%, ctx=1417, majf=0, minf=32769 
00:25:53.504 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:25:53.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.504 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.504 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.504 job0: (groupid=0, jobs=1): err= 0: pid=2316798: Wed Jul 24 10:47:59 2024 00:25:53.504 read: IOPS=73, BW=73.6MiB/s (77.1MB/s)(789MiB/10725msec) 00:25:53.504 slat (usec): min=44, max=1891.6k, avg=12671.39, stdev=68928.22 00:25:53.504 clat (msec): min=516, max=4760, avg=1230.43, stdev=759.41 00:25:53.504 lat (msec): min=522, max=4855, avg=1243.10, stdev=772.70 00:25:53.504 clat percentiles (msec): 00:25:53.504 | 1.00th=[ 523], 5.00th=[ 531], 10.00th=[ 550], 20.00th=[ 634], 00:25:53.504 | 30.00th=[ 768], 40.00th=[ 911], 50.00th=[ 1036], 60.00th=[ 1083], 00:25:53.504 | 70.00th=[ 1200], 80.00th=[ 1703], 90.00th=[ 2534], 95.00th=[ 2869], 00:25:53.504 | 99.00th=[ 4597], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:25:53.504 | 99.99th=[ 4732] 00:25:53.504 bw ( KiB/s): min=26570, max=244780, per=4.10%, avg=123115.18, stdev=71624.14, samples=11 00:25:53.505 iops : min= 25, max= 239, avg=120.00, stdev=70.09, samples=11 00:25:53.505 lat (msec) : 750=28.64%, 1000=18.38%, 2000=38.91%, >=2000=14.07% 00:25:53.505 cpu : usr=0.05%, sys=1.27%, ctx=1588, majf=0, minf=32769 00:25:53.505 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:25:53.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.505 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.505 issued rwts: total=789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.505 job0: (groupid=0, jobs=1): err= 0: pid=2316799: Wed Jul 24 10:47:59 2024 00:25:53.505 read: IOPS=53, BW=53.1MiB/s (55.7MB/s)(677MiB/12742msec) 00:25:53.505 slat (usec): min=59, max=2180.2k, avg=15743.37, stdev=84597.17 00:25:53.505 clat (msec): min=594, max=6712, avg=2291.68, stdev=1714.83 00:25:53.505 lat (msec): min=619, max=6732, avg=2307.42, stdev=1721.26 00:25:53.505 clat percentiles (msec): 00:25:53.505 | 1.00th=[ 625], 5.00th=[ 634], 10.00th=[ 642], 20.00th=[ 701], 00:25:53.505 | 30.00th=[ 810], 40.00th=[ 1838], 50.00th=[ 2123], 60.00th=[ 2299], 00:25:53.505 | 70.00th=[ 2467], 80.00th=[ 2735], 90.00th=[ 5336], 95.00th=[ 6074], 00:25:53.505 | 99.00th=[ 6611], 99.50th=[ 6678], 99.90th=[ 6745], 99.95th=[ 6745], 00:25:53.505 | 99.99th=[ 6745] 00:25:53.505 bw ( KiB/s): min= 2048, max=196608, per=2.20%, avg=66247.00, stdev=48888.30, samples=17 00:25:53.505 iops : min= 2, max= 192, avg=64.59, stdev=47.78, samples=17 00:25:53.505 lat (msec) : 750=27.33%, 1000=6.65%, 2000=12.41%, >=2000=53.62% 00:25:53.505 cpu : usr=0.01%, sys=0.90%, ctx=1703, majf=0, minf=32769 00:25:53.505 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:25:53.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.505 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.505 issued rwts: total=677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.505 job0: (groupid=0, jobs=1): err= 0: pid=2316800: Wed Jul 24 10:47:59 2024 00:25:53.505 read: IOPS=30, BW=30.5MiB/s (32.0MB/s)(327MiB/10730msec) 
00:25:53.505 slat (usec): min=611, max=2126.6k, avg=32645.50, stdev=158658.76 00:25:53.505 clat (msec): min=52, max=5959, avg=3155.38, stdev=1185.51 00:25:53.505 lat (msec): min=1280, max=6011, avg=3188.03, stdev=1177.69 00:25:53.505 clat percentiles (msec): 00:25:53.505 | 1.00th=[ 1284], 5.00th=[ 1351], 10.00th=[ 1502], 20.00th=[ 1905], 00:25:53.505 | 30.00th=[ 2400], 40.00th=[ 2802], 50.00th=[ 3272], 60.00th=[ 3608], 00:25:53.505 | 70.00th=[ 3775], 80.00th=[ 4077], 90.00th=[ 4665], 95.00th=[ 5336], 00:25:53.505 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:25:53.505 | 99.99th=[ 5940] 00:25:53.505 bw ( KiB/s): min= 4087, max=86016, per=1.23%, avg=37033.64, stdev=21226.96, samples=11 00:25:53.505 iops : min= 3, max= 84, avg=35.91, stdev=20.84, samples=11 00:25:53.505 lat (msec) : 100=0.31%, 2000=20.49%, >=2000=79.20% 00:25:53.505 cpu : usr=0.03%, sys=1.00%, ctx=1111, majf=0, minf=32769 00:25:53.505 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.7% 00:25:53.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.505 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:53.505 issued rwts: total=327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.505 job0: (groupid=0, jobs=1): err= 0: pid=2316801: Wed Jul 24 10:47:59 2024 00:25:53.505 read: IOPS=6, BW=6956KiB/s (7123kB/s)(73.0MiB/10746msec) 00:25:53.505 slat (usec): min=728, max=2068.4k, avg=146218.82, stdev=515808.44 00:25:53.505 clat (msec): min=71, max=10743, avg=7252.44, stdev=3451.07 00:25:53.505 lat (msec): min=2118, max=10745, avg=7398.66, stdev=3367.72 00:25:53.505 clat percentiles (msec): 00:25:53.505 | 1.00th=[ 72], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279], 00:25:53.505 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[ 8658], 00:25:53.505 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:25:53.505 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.505 | 99.99th=[10805] 00:25:53.505 lat (msec) : 100=1.37%, >=2000=98.63% 00:25:53.505 cpu : usr=0.00%, sys=0.53%, ctx=83, majf=0, minf=18689 00:25:53.505 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:25:53.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.505 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:53.505 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.505 job0: (groupid=0, jobs=1): err= 0: pid=2316802: Wed Jul 24 10:47:59 2024 00:25:53.505 read: IOPS=59, BW=59.8MiB/s (62.7MB/s)(765MiB/12784msec) 00:25:53.505 slat (usec): min=75, max=2157.6k, avg=13977.23, stdev=100697.38 00:25:53.505 clat (msec): min=204, max=6150, avg=1768.22, stdev=1613.74 00:25:53.505 lat (msec): min=206, max=6168, avg=1782.20, stdev=1619.69 00:25:53.505 clat percentiles (msec): 00:25:53.505 | 1.00th=[ 207], 5.00th=[ 239], 10.00th=[ 409], 20.00th=[ 726], 00:25:53.505 | 30.00th=[ 961], 40.00th=[ 1011], 50.00th=[ 1133], 60.00th=[ 1318], 00:25:53.505 | 70.00th=[ 1536], 80.00th=[ 1938], 90.00th=[ 4866], 95.00th=[ 5671], 00:25:53.505 | 99.00th=[ 6007], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:25:53.505 | 99.99th=[ 6141] 00:25:53.505 bw ( KiB/s): min= 2048, max=288768, per=3.11%, avg=93337.79, stdev=80430.24, samples=14 00:25:53.505 iops : min= 2, max= 282, avg=91.00, 
stdev=78.63, samples=14 00:25:53.505 lat (msec) : 250=5.10%, 500=8.37%, 750=7.32%, 1000=17.39%, 2000=42.09% 00:25:53.505 lat (msec) : >=2000=19.74% 00:25:53.505 cpu : usr=0.04%, sys=0.95%, ctx=2119, majf=0, minf=32769 00:25:53.505 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:25:53.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.505 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.505 issued rwts: total=765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.505 job0: (groupid=0, jobs=1): err= 0: pid=2316803: Wed Jul 24 10:47:59 2024 00:25:53.505 read: IOPS=100, BW=100MiB/s (105MB/s)(1011MiB/10069msec) 00:25:53.505 slat (usec): min=39, max=2050.0k, avg=9889.13, stdev=90283.22 00:25:53.505 clat (msec): min=65, max=5055, avg=962.47, stdev=1015.75 00:25:53.505 lat (msec): min=69, max=5057, avg=972.36, stdev=1023.65 00:25:53.505 clat percentiles (msec): 00:25:53.505 | 1.00th=[ 117], 5.00th=[ 305], 10.00th=[ 510], 20.00th=[ 550], 00:25:53.505 | 30.00th=[ 592], 40.00th=[ 651], 50.00th=[ 776], 60.00th=[ 827], 00:25:53.505 | 70.00th=[ 877], 80.00th=[ 911], 90.00th=[ 978], 95.00th=[ 4866], 00:25:53.505 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:25:53.505 | 99.99th=[ 5067] 00:25:53.505 bw ( KiB/s): min=34816, max=227328, per=5.48%, avg=164584.73, stdev=54457.34, samples=11 00:25:53.505 iops : min= 34, max= 222, avg=160.73, stdev=53.18, samples=11 00:25:53.505 lat (msec) : 100=0.20%, 250=3.96%, 500=5.44%, 750=37.69%, 1000=44.61% 00:25:53.505 lat (msec) : 2000=1.48%, >=2000=6.63% 00:25:53.505 cpu : usr=0.02%, sys=1.73%, ctx=922, majf=0, minf=32769 00:25:53.506 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:25:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.506 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.506 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.506 job0: (groupid=0, jobs=1): err= 0: pid=2316804: Wed Jul 24 10:47:59 2024 00:25:53.506 read: IOPS=33, BW=33.9MiB/s (35.6MB/s)(435MiB/12821msec) 00:25:53.506 slat (usec): min=39, max=2123.1k, avg=24684.22, stdev=192577.36 00:25:53.506 clat (msec): min=598, max=9076, avg=3178.42, stdev=3570.83 00:25:53.506 lat (msec): min=599, max=9078, avg=3203.11, stdev=3577.36 00:25:53.506 clat percentiles (msec): 00:25:53.506 | 1.00th=[ 600], 5.00th=[ 600], 10.00th=[ 600], 20.00th=[ 609], 00:25:53.506 | 30.00th=[ 625], 40.00th=[ 659], 50.00th=[ 693], 60.00th=[ 844], 00:25:53.506 | 70.00th=[ 4178], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 9060], 00:25:53.506 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:25:53.506 | 99.99th=[ 9060] 00:25:53.506 bw ( KiB/s): min= 1957, max=219136, per=2.62%, avg=78836.62, stdev=91855.01, samples=8 00:25:53.506 iops : min= 1, max= 214, avg=76.88, stdev=89.81, samples=8 00:25:53.506 lat (msec) : 750=56.09%, 1000=5.75%, >=2000=38.16% 00:25:53.506 cpu : usr=0.02%, sys=0.82%, ctx=452, majf=0, minf=32769 00:25:53.506 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:25:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.506 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:53.506 issued rwts: total=435,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:53.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.506 job0: (groupid=0, jobs=1): err= 0: pid=2316805: Wed Jul 24 10:47:59 2024 00:25:53.506 read: IOPS=63, BW=63.7MiB/s (66.8MB/s)(811MiB/12730msec) 00:25:53.506 slat (usec): min=103, max=2170.1k, avg=13118.73, stdev=97966.15 00:25:53.506 clat (msec): min=228, max=5798, avg=1694.78, stdev=1615.75 00:25:53.506 lat (msec): min=229, max=5803, avg=1707.90, stdev=1621.38 00:25:53.506 clat percentiles (msec): 00:25:53.506 | 1.00th=[ 232], 5.00th=[ 288], 10.00th=[ 380], 20.00th=[ 542], 00:25:53.506 | 30.00th=[ 575], 40.00th=[ 592], 50.00th=[ 701], 60.00th=[ 1569], 00:25:53.506 | 70.00th=[ 2265], 80.00th=[ 2601], 90.00th=[ 4866], 95.00th=[ 5201], 00:25:53.506 | 99.00th=[ 5604], 99.50th=[ 5738], 99.90th=[ 5805], 99.95th=[ 5805], 00:25:53.506 | 99.99th=[ 5805] 00:25:53.506 bw ( KiB/s): min= 2048, max=311296, per=3.58%, avg=107746.85, stdev=91889.94, samples=13 00:25:53.506 iops : min= 2, max= 304, avg=105.15, stdev=89.77, samples=13 00:25:53.506 lat (msec) : 250=2.47%, 500=14.80%, 750=35.14%, 1000=3.21%, 2000=10.23% 00:25:53.506 lat (msec) : >=2000=34.16% 00:25:53.506 cpu : usr=0.05%, sys=0.97%, ctx=1940, majf=0, minf=32769 00:25:53.506 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.2% 00:25:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.506 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.506 issued rwts: total=811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.506 job1: (groupid=0, jobs=1): err= 0: pid=2316806: Wed Jul 24 10:47:59 2024 00:25:53.506 read: IOPS=4, BW=4263KiB/s (4365kB/s)(53.0MiB/12731msec) 00:25:53.506 slat (usec): min=512, max=2089.4k, avg=200486.47, stdev=605775.31 00:25:53.506 clat (msec): min=2104, max=12692, avg=8521.91, stdev=2669.68 00:25:53.506 lat (msec): min=4168, max=12729, avg=8722.40, stdev=2575.78 00:25:53.506 clat percentiles (msec): 00:25:53.506 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:25:53.506 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10537], 60.00th=[10537], 00:25:53.506 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:53.506 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:25:53.506 | 99.99th=[12684] 00:25:53.506 lat (msec) : >=2000=100.00% 00:25:53.506 cpu : usr=0.00%, sys=0.28%, ctx=60, majf=0, minf=13569 00:25:53.506 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:25:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.506 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.506 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.506 job1: (groupid=0, jobs=1): err= 0: pid=2316807: Wed Jul 24 10:47:59 2024 00:25:53.506 read: IOPS=2, BW=2277KiB/s (2331kB/s)(24.0MiB/10794msec) 00:25:53.506 slat (usec): min=947, max=2102.3k, avg=446825.86, stdev=852505.55 00:25:53.506 clat (msec): min=69, max=10792, avg=8250.53, stdev=3670.08 00:25:53.506 lat (msec): min=2126, max=10793, avg=8697.35, stdev=3260.68 00:25:53.506 clat percentiles (msec): 00:25:53.506 | 1.00th=[ 69], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279], 00:25:53.506 | 30.00th=[ 6477], 40.00th=[10537], 50.00th=[10671], 60.00th=[10805], 00:25:53.506 | 70.00th=[10805], 80.00th=[10805], 
90.00th=[10805], 95.00th=[10805], 00:25:53.506 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.506 | 99.99th=[10805] 00:25:53.506 lat (msec) : 100=4.17%, >=2000=95.83% 00:25:53.506 cpu : usr=0.00%, sys=0.21%, ctx=94, majf=0, minf=6145 00:25:53.506 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:25:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.506 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:53.506 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.506 job1: (groupid=0, jobs=1): err= 0: pid=2316808: Wed Jul 24 10:47:59 2024 00:25:53.506 read: IOPS=99, BW=99.2MiB/s (104MB/s)(1067MiB/10753msec) 00:25:53.506 slat (usec): min=42, max=2057.4k, avg=10042.42, stdev=107008.31 00:25:53.506 clat (msec): min=31, max=4738, avg=995.11, stdev=1095.84 00:25:53.506 lat (msec): min=383, max=4743, avg=1005.15, stdev=1101.81 00:25:53.506 clat percentiles (msec): 00:25:53.506 | 1.00th=[ 384], 5.00th=[ 388], 10.00th=[ 388], 20.00th=[ 393], 00:25:53.506 | 30.00th=[ 397], 40.00th=[ 405], 50.00th=[ 592], 60.00th=[ 667], 00:25:53.506 | 70.00th=[ 726], 80.00th=[ 810], 90.00th=[ 2601], 95.00th=[ 4597], 00:25:53.506 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:25:53.506 | 99.99th=[ 4732] 00:25:53.506 bw ( KiB/s): min=40960, max=331776, per=7.11%, avg=213628.78, stdev=93092.19, samples=9 00:25:53.506 iops : min= 40, max= 324, avg=208.56, stdev=90.92, samples=9 00:25:53.506 lat (msec) : 50=0.09%, 500=45.45%, 750=27.84%, 1000=8.53%, >=2000=18.09% 00:25:53.506 cpu : usr=0.04%, sys=1.74%, ctx=983, majf=0, minf=32769 00:25:53.506 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:25:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.506 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.506 issued rwts: total=1067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.507 job1: (groupid=0, jobs=1): err= 0: pid=2316809: Wed Jul 24 10:47:59 2024 00:25:53.507 read: IOPS=18, BW=18.6MiB/s (19.5MB/s)(201MiB/10803msec) 00:25:53.507 slat (usec): min=87, max=2152.1k, avg=53397.19, stdev=296126.36 00:25:53.507 clat (msec): min=68, max=10676, avg=6592.93, stdev=4144.63 00:25:53.507 lat (msec): min=778, max=10701, avg=6646.32, stdev=4124.90 00:25:53.507 clat percentiles (msec): 00:25:53.507 | 1.00th=[ 776], 5.00th=[ 793], 10.00th=[ 844], 20.00th=[ 894], 00:25:53.507 | 30.00th=[ 1938], 40.00th=[ 8221], 50.00th=[ 9731], 60.00th=[ 9866], 00:25:53.507 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:25:53.507 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10671], 99.95th=[10671], 00:25:53.507 | 99.99th=[10671] 00:25:53.507 bw ( KiB/s): min= 2048, max=69632, per=0.71%, avg=21357.71, stdev=24713.77, samples=7 00:25:53.507 iops : min= 2, max= 68, avg=20.86, stdev=24.13, samples=7 00:25:53.507 lat (msec) : 100=0.50%, 1000=25.87%, 2000=5.47%, >=2000=68.16% 00:25:53.507 cpu : usr=0.00%, sys=0.90%, ctx=250, majf=0, minf=32769 00:25:53.507 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=15.9%, >=64=68.7% 00:25:53.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.507 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:25:53.507 issued rwts: total=201,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:53.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.507 job1: (groupid=0, jobs=1): err= 0: pid=2316810: Wed Jul 24 10:47:59 2024 00:25:53.507 read: IOPS=5, BW=5129KiB/s (5252kB/s)(54.0MiB/10781msec) 00:25:53.507 slat (usec): min=548, max=2109.2k, avg=197946.74, stdev=578668.85 00:25:53.507 clat (msec): min=91, max=10778, avg=8704.51, stdev=2564.72 00:25:53.507 lat (msec): min=2151, max=10780, avg=8902.46, stdev=2284.61 00:25:53.507 clat percentiles (msec): 00:25:53.507 | 1.00th=[ 92], 5.00th=[ 2165], 10.00th=[ 4396], 20.00th=[ 8288], 00:25:53.507 | 30.00th=[ 8356], 40.00th=[ 8423], 50.00th=[ 8490], 60.00th=[10537], 00:25:53.507 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:25:53.507 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.507 | 99.99th=[10805] 00:25:53.507 lat (msec) : 100=1.85%, >=2000=98.15% 00:25:53.507 cpu : usr=0.00%, sys=0.32%, ctx=138, majf=0, minf=13825 00:25:53.507 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:25:53.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.507 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.507 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.507 job1: (groupid=0, jobs=1): err= 0: pid=2316811: Wed Jul 24 10:47:59 2024 00:25:53.507 read: IOPS=164, BW=164MiB/s (172MB/s)(1646MiB/10033msec) 00:25:53.507 slat (usec): min=34, max=1913.2k, avg=6070.74, stdev=47490.16 00:25:53.507 clat (msec): min=32, max=2969, avg=611.82, stdev=361.51 00:25:53.507 lat (msec): min=34, max=2971, avg=617.89, stdev=366.69 00:25:53.507 clat percentiles (msec): 00:25:53.507 | 1.00th=[ 66], 5.00th=[ 380], 10.00th=[ 393], 20.00th=[ 401], 00:25:53.507 | 30.00th=[ 405], 40.00th=[ 422], 50.00th=[ 558], 60.00th=[ 667], 00:25:53.507 | 70.00th=[ 718], 80.00th=[ 735], 90.00th=[ 869], 95.00th=[ 1045], 00:25:53.507 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970], 00:25:53.507 | 99.99th=[ 2970] 00:25:53.507 bw ( KiB/s): min=22483, max=327680, per=6.90%, avg=207391.13, stdev=90448.66, samples=15 00:25:53.507 iops : min= 21, max= 320, avg=202.47, stdev=88.47, samples=15 00:25:53.507 lat (msec) : 50=0.55%, 100=0.85%, 250=2.00%, 500=41.19%, 750=39.06% 00:25:53.507 lat (msec) : 1000=9.90%, 2000=4.86%, >=2000=1.58% 00:25:53.507 cpu : usr=0.12%, sys=1.99%, ctx=2291, majf=0, minf=32769 00:25:53.507 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:25:53.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.507 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.507 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.507 job1: (groupid=0, jobs=1): err= 0: pid=2316812: Wed Jul 24 10:47:59 2024 00:25:53.507 read: IOPS=43, BW=43.8MiB/s (45.9MB/s)(473MiB/10807msec) 00:25:53.507 slat (usec): min=681, max=2169.5k, avg=22651.80, stdev=179338.11 00:25:53.507 clat (msec): min=90, max=8816, avg=1517.86, stdev=1919.16 00:25:53.507 lat (msec): min=394, max=8906, avg=1540.51, stdev=1951.80 00:25:53.507 clat percentiles (msec): 00:25:53.507 | 1.00th=[ 393], 5.00th=[ 397], 10.00th=[ 405], 20.00th=[ 414], 00:25:53.507 | 30.00th=[ 430], 40.00th=[ 439], 50.00th=[ 451], 60.00th=[ 936], 00:25:53.507 | 70.00th=[ 1133], 
80.00th=[ 2702], 90.00th=[ 4245], 95.00th=[ 4463], 00:25:53.507 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:25:53.507 | 99.99th=[ 8792] 00:25:53.507 bw ( KiB/s): min=118784, max=305152, per=7.84%, avg=235520.00, stdev=101721.91, samples=3 00:25:53.507 iops : min= 116, max= 298, avg=230.00, stdev=99.34, samples=3 00:25:53.507 lat (msec) : 100=0.21%, 500=52.43%, 1000=10.36%, 2000=16.07%, >=2000=20.93% 00:25:53.507 cpu : usr=0.02%, sys=0.96%, ctx=924, majf=0, minf=32769 00:25:53.507 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:25:53.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.507 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:53.507 issued rwts: total=473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.507 job1: (groupid=0, jobs=1): err= 0: pid=2316813: Wed Jul 24 10:47:59 2024 00:25:53.507 read: IOPS=23, BW=23.0MiB/s (24.1MB/s)(247MiB/10733msec) 00:25:53.507 slat (usec): min=51, max=2098.8k, avg=43167.62, stdev=268743.81 00:25:53.507 clat (msec): min=68, max=10283, avg=5382.03, stdev=4290.66 00:25:53.507 lat (msec): min=383, max=10284, avg=5425.20, stdev=4286.81 00:25:53.507 clat percentiles (msec): 00:25:53.507 | 1.00th=[ 384], 5.00th=[ 388], 10.00th=[ 388], 20.00th=[ 393], 00:25:53.507 | 30.00th=[ 481], 40.00th=[ 4044], 50.00th=[ 6074], 60.00th=[ 8658], 00:25:53.507 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10268], 00:25:53.507 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:25:53.507 | 99.99th=[10268] 00:25:53.507 bw ( KiB/s): min= 4087, max=153600, per=1.16%, avg=34819.29, stdev=53168.19, samples=7 00:25:53.507 iops : min= 3, max= 150, avg=33.86, stdev=52.02, samples=7 00:25:53.507 lat (msec) : 100=0.40%, 500=34.01%, 750=1.21%, 2000=2.02%, >=2000=62.35% 00:25:53.507 cpu : usr=0.01%, sys=0.92%, ctx=289, majf=0, minf=32769 00:25:53.507 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.5%, 32=13.0%, >=64=74.5% 00:25:53.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.507 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:53.507 issued rwts: total=247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job1: (groupid=0, jobs=1): err= 0: pid=2316814: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=8, BW=8259KiB/s (8457kB/s)(87.0MiB/10787msec) 00:25:53.508 slat (usec): min=726, max=2248.3k, avg=122921.63, stdev=458054.17 00:25:53.508 clat (msec): min=92, max=10766, avg=6488.87, stdev=1537.41 00:25:53.508 lat (msec): min=2134, max=10786, avg=6611.80, stdev=1444.78 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 93], 5.00th=[ 5805], 10.00th=[ 5873], 20.00th=[ 6007], 00:25:53.508 | 30.00th=[ 6074], 40.00th=[ 6141], 50.00th=[ 6208], 60.00th=[ 6275], 00:25:53.508 | 70.00th=[ 6342], 80.00th=[ 6409], 90.00th=[ 8658], 95.00th=[10671], 00:25:53.508 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.508 | 99.99th=[10805] 00:25:53.508 lat (msec) : 100=1.15%, >=2000=98.85% 00:25:53.508 cpu : usr=0.00%, sys=0.42%, ctx=266, majf=0, minf=22273 00:25:53.508 IO depths : 1=1.1%, 2=2.3%, 4=4.6%, 8=9.2%, 16=18.4%, 32=36.8%, >=64=27.6% 00:25:53.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=100.0% 00:25:53.508 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job1: (groupid=0, jobs=1): err= 0: pid=2316815: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=12, BW=13.0MiB/s (13.6MB/s)(140MiB/10802msec) 00:25:53.508 slat (usec): min=889, max=2094.3k, avg=76500.78, stdev=359182.75 00:25:53.508 clat (msec): min=91, max=10775, avg=4653.24, stdev=1981.80 00:25:53.508 lat (msec): min=2144, max=10776, avg=4729.74, stdev=2010.71 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 2140], 5.00th=[ 3473], 10.00th=[ 3574], 20.00th=[ 3675], 00:25:53.508 | 30.00th=[ 3742], 40.00th=[ 3842], 50.00th=[ 3943], 60.00th=[ 4044], 00:25:53.508 | 70.00th=[ 4144], 80.00th=[ 4279], 90.00th=[ 7349], 95.00th=[10671], 00:25:53.508 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.508 | 99.99th=[10805] 00:25:53.508 bw ( KiB/s): min= 4096, max=20480, per=0.41%, avg=12288.00, stdev=11585.24, samples=2 00:25:53.508 iops : min= 4, max= 20, avg=12.00, stdev=11.31, samples=2 00:25:53.508 lat (msec) : 100=0.71%, >=2000=99.29% 00:25:53.508 cpu : usr=0.00%, sys=0.69%, ctx=354, majf=0, minf=32769 00:25:53.508 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.7%, 16=11.4%, 32=22.9%, >=64=55.0% 00:25:53.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 complete : 0=0.0%, 4=92.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.1% 00:25:53.508 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job1: (groupid=0, jobs=1): err= 0: pid=2316816: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=4, BW=4664KiB/s (4776kB/s)(49.0MiB/10758msec) 00:25:53.508 slat (usec): min=651, max=2062.3k, avg=217641.15, stdev=619057.44 00:25:53.508 clat (msec): min=93, max=10754, avg=7494.71, stdev=3263.65 00:25:53.508 lat (msec): min=2128, max=10757, avg=7712.35, stdev=3111.83 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 93], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:25:53.508 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10537], 00:25:53.508 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:25:53.508 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.508 | 99.99th=[10805] 00:25:53.508 lat (msec) : 100=2.04%, >=2000=97.96% 00:25:53.508 cpu : usr=0.00%, sys=0.40%, ctx=81, majf=0, minf=12545 00:25:53.508 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:25:53.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.508 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job1: (groupid=0, jobs=1): err= 0: pid=2316817: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=2, BW=2491KiB/s (2550kB/s)(26.0MiB/10690msec) 00:25:53.508 slat (msec): min=5, max=2072, avg=407.53, stdev=809.69 00:25:53.508 clat (msec): min=93, max=10632, avg=5374.47, stdev=3140.67 00:25:53.508 lat (msec): min=2128, max=10689, avg=5782.00, stdev=3115.33 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 93], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:25:53.508 | 30.00th=[ 2165], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 6477], 00:25:53.508 | 70.00th=[ 6544], 80.00th=[ 8658], 
90.00th=[10537], 95.00th=[10671], 00:25:53.508 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.508 | 99.99th=[10671] 00:25:53.508 lat (msec) : 100=3.85%, >=2000=96.15% 00:25:53.508 cpu : usr=0.00%, sys=0.17%, ctx=65, majf=0, minf=6657 00:25:53.508 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:25:53.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:53.508 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job1: (groupid=0, jobs=1): err= 0: pid=2316818: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=1, BW=1054KiB/s (1080kB/s)(11.0MiB/10682msec) 00:25:53.508 slat (msec): min=15, max=4082, avg=964.74, stdev=1398.09 00:25:53.508 clat (msec): min=69, max=10666, avg=6024.80, stdev=4104.44 00:25:53.508 lat (msec): min=2119, max=10681, avg=6989.54, stdev=3800.51 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 69], 5.00th=[ 69], 10.00th=[ 2123], 20.00th=[ 2140], 00:25:53.508 | 30.00th=[ 2198], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:25:53.508 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:53.508 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.508 | 99.99th=[10671] 00:25:53.508 lat (msec) : 100=9.09%, >=2000=90.91% 00:25:53.508 cpu : usr=0.00%, sys=0.08%, ctx=71, majf=0, minf=2817 00:25:53.508 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job2: (groupid=0, jobs=1): err= 0: pid=2316820: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=7, BW=8041KiB/s (8234kB/s)(84.0MiB/10697msec) 00:25:53.508 slat (usec): min=690, max=2073.9k, avg=126504.26, stdev=449790.36 00:25:53.508 clat (msec): min=70, max=10694, avg=4467.94, stdev=1820.56 00:25:53.508 lat (msec): min=2103, max=10696, avg=4594.44, stdev=1879.53 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 70], 5.00th=[ 3473], 10.00th=[ 3540], 20.00th=[ 3641], 00:25:53.508 | 30.00th=[ 3775], 40.00th=[ 3876], 50.00th=[ 3977], 60.00th=[ 4144], 00:25:53.508 | 70.00th=[ 4245], 80.00th=[ 4396], 90.00th=[ 6477], 95.00th=[ 8658], 00:25:53.508 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.508 | 99.99th=[10671] 00:25:53.508 lat (msec) : 100=1.19%, >=2000=98.81% 00:25:53.508 cpu : usr=0.01%, sys=0.38%, ctx=192, majf=0, minf=21505 00:25:53.508 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.5%, 16=19.0%, 32=38.1%, >=64=25.0% 00:25:53.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:53.508 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job2: (groupid=0, jobs=1): err= 0: pid=2316821: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=3, BW=3434KiB/s (3516kB/s)(36.0MiB/10736msec) 00:25:53.508 slat (usec): min=565, max=2100.9k, avg=295740.46, stdev=716428.92 00:25:53.508 clat (msec): min=88, max=10734, avg=8802.56, 
stdev=2815.40 00:25:53.508 lat (msec): min=2157, max=10734, avg=9098.30, stdev=2402.81 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 89], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6477], 00:25:53.508 | 30.00th=[ 6544], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671], 00:25:53.508 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:53.508 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.508 | 99.99th=[10671] 00:25:53.508 lat (msec) : 100=2.78%, >=2000=97.22% 00:25:53.508 cpu : usr=0.00%, sys=0.21%, ctx=88, majf=0, minf=9217 00:25:53.508 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:25:53.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.508 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.508 job2: (groupid=0, jobs=1): err= 0: pid=2316822: Wed Jul 24 10:47:59 2024 00:25:53.508 read: IOPS=3, BW=3891KiB/s (3984kB/s)(41.0MiB/10791msec) 00:25:53.508 slat (usec): min=879, max=2100.9k, avg=261464.59, stdev=676220.39 00:25:53.508 clat (msec): min=69, max=10789, avg=8867.39, stdev=2966.40 00:25:53.508 lat (msec): min=2129, max=10789, avg=9128.86, stdev=2624.30 00:25:53.508 clat percentiles (msec): 00:25:53.508 | 1.00th=[ 70], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6477], 00:25:53.508 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[10671], 60.00th=[10805], 00:25:53.509 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:25:53.509 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.509 | 99.99th=[10805] 00:25:53.509 lat (msec) : 100=2.44%, >=2000=97.56% 00:25:53.509 cpu : usr=0.00%, sys=0.32%, ctx=93, majf=0, minf=10497 00:25:53.509 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.509 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316823: Wed Jul 24 10:47:59 2024 00:25:53.509 read: IOPS=4, BW=4113KiB/s (4212kB/s)(43.0MiB/10705msec) 00:25:53.509 slat (usec): min=463, max=2075.3k, avg=246724.86, stdev=644663.91 00:25:53.509 clat (msec): min=94, max=10701, avg=5466.99, stdev=2552.41 00:25:53.509 lat (msec): min=2118, max=10704, avg=5713.72, stdev=2533.44 00:25:53.509 clat percentiles (msec): 00:25:53.509 | 1.00th=[ 95], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2198], 00:25:53.509 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6275], 60.00th=[ 6342], 00:25:53.509 | 70.00th=[ 6409], 80.00th=[ 6477], 90.00th=[ 8658], 95.00th=[10537], 00:25:53.509 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.509 | 99.99th=[10671] 00:25:53.509 lat (msec) : 100=2.33%, >=2000=97.67% 00:25:53.509 cpu : usr=0.00%, sys=0.22%, ctx=106, majf=0, minf=11009 00:25:53.509 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.509 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 
latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316824: Wed Jul 24 10:47:59 2024 00:25:53.509 read: IOPS=132, BW=132MiB/s (139MB/s)(1412MiB/10686msec) 00:25:53.509 slat (usec): min=41, max=1893.4k, avg=7524.59, stdev=52539.27 00:25:53.509 clat (msec): min=52, max=2777, avg=925.11, stdev=489.50 00:25:53.509 lat (msec): min=599, max=2779, avg=932.63, stdev=490.00 00:25:53.509 clat percentiles (msec): 00:25:53.509 | 1.00th=[ 600], 5.00th=[ 609], 10.00th=[ 625], 20.00th=[ 667], 00:25:53.509 | 30.00th=[ 726], 40.00th=[ 760], 50.00th=[ 785], 60.00th=[ 818], 00:25:53.509 | 70.00th=[ 869], 80.00th=[ 919], 90.00th=[ 995], 95.00th=[ 2433], 00:25:53.509 | 99.00th=[ 2702], 99.50th=[ 2769], 99.90th=[ 2769], 99.95th=[ 2769], 00:25:53.509 | 99.99th=[ 2769] 00:25:53.509 bw ( KiB/s): min=42922, max=223232, per=5.15%, avg=154660.65, stdev=46896.56, samples=17 00:25:53.509 iops : min= 41, max= 218, avg=150.82, stdev=45.93, samples=17 00:25:53.509 lat (msec) : 100=0.07%, 750=35.76%, 1000=54.18%, 2000=1.49%, >=2000=8.50% 00:25:53.509 cpu : usr=0.07%, sys=2.05%, ctx=1243, majf=0, minf=32769 00:25:53.509 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.509 issued rwts: total=1412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316825: Wed Jul 24 10:47:59 2024 00:25:53.509 read: IOPS=17, BW=17.2MiB/s (18.0MB/s)(184MiB/10692msec) 00:25:53.509 slat (usec): min=373, max=2096.3k, avg=54346.67, stdev=302054.10 00:25:53.509 clat (msec): min=691, max=7521, avg=2074.02, stdev=2001.30 00:25:53.509 lat (msec): min=691, max=9495, avg=2128.36, stdev=2078.76 00:25:53.509 clat percentiles (msec): 00:25:53.509 | 1.00th=[ 693], 5.00th=[ 776], 10.00th=[ 827], 20.00th=[ 969], 00:25:53.509 | 30.00th=[ 1053], 40.00th=[ 1116], 50.00th=[ 1217], 60.00th=[ 1401], 00:25:53.509 | 70.00th=[ 1536], 80.00th=[ 2165], 90.00th=[ 5470], 95.00th=[ 7483], 00:25:53.509 | 99.00th=[ 7550], 99.50th=[ 7550], 99.90th=[ 7550], 99.95th=[ 7550], 00:25:53.509 | 99.99th=[ 7550] 00:25:53.509 bw ( KiB/s): min= 2048, max=80652, per=1.38%, avg=41350.00, stdev=55581.42, samples=2 00:25:53.509 iops : min= 2, max= 78, avg=40.00, stdev=53.74, samples=2 00:25:53.509 lat (msec) : 750=3.80%, 1000=17.39%, 2000=58.15%, >=2000=20.65% 00:25:53.509 cpu : usr=0.00%, sys=0.74%, ctx=431, majf=0, minf=32769 00:25:53.509 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.7%, 32=17.4%, >=64=65.8% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:25:53.509 issued rwts: total=184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316826: Wed Jul 24 10:47:59 2024 00:25:53.509 read: IOPS=141, BW=141MiB/s (148MB/s)(1416MiB/10034msec) 00:25:53.509 slat (usec): min=40, max=2059.5k, avg=7057.88, stdev=76864.24 00:25:53.509 clat (msec): min=32, max=4790, avg=551.62, stdev=493.06 00:25:53.509 lat (msec): min=35, max=4791, avg=558.68, stdev=505.76 00:25:53.509 clat percentiles (msec): 00:25:53.509 | 1.00th=[ 82], 5.00th=[ 249], 10.00th=[ 384], 20.00th=[ 388], 00:25:53.509 | 30.00th=[ 
388], 40.00th=[ 401], 50.00th=[ 414], 60.00th=[ 502], 00:25:53.509 | 70.00th=[ 651], 80.00th=[ 693], 90.00th=[ 743], 95.00th=[ 785], 00:25:53.509 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:53.509 | 99.99th=[ 4799] 00:25:53.509 bw ( KiB/s): min=159744, max=333157, per=7.98%, avg=239924.73, stdev=74153.47, samples=11 00:25:53.509 iops : min= 156, max= 325, avg=234.18, stdev=72.44, samples=11 00:25:53.509 lat (msec) : 50=0.42%, 100=1.27%, 250=3.32%, 500=54.94%, 750=32.49% 00:25:53.509 lat (msec) : 1000=6.00%, >=2000=1.55% 00:25:53.509 cpu : usr=0.09%, sys=1.79%, ctx=1300, majf=0, minf=32769 00:25:53.509 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.509 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316827: Wed Jul 24 10:47:59 2024 00:25:53.509 read: IOPS=23, BW=23.6MiB/s (24.8MB/s)(252MiB/10672msec) 00:25:53.509 slat (usec): min=58, max=4251.5k, avg=39710.47, stdev=320470.07 00:25:53.509 clat (msec): min=579, max=9396, avg=1124.47, stdev=1468.95 00:25:53.509 lat (msec): min=581, max=9399, avg=1164.18, stdev=1571.36 00:25:53.509 clat percentiles (msec): 00:25:53.509 | 1.00th=[ 584], 5.00th=[ 584], 10.00th=[ 592], 20.00th=[ 617], 00:25:53.509 | 30.00th=[ 617], 40.00th=[ 693], 50.00th=[ 751], 60.00th=[ 885], 00:25:53.509 | 70.00th=[ 986], 80.00th=[ 1083], 90.00th=[ 1217], 95.00th=[ 2056], 00:25:53.509 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:25:53.509 | 99.99th=[ 9463] 00:25:53.509 bw ( KiB/s): min=53248, max=202752, per=4.26%, avg=128000.00, stdev=105715.29, samples=2 00:25:53.509 iops : min= 52, max= 198, avg=125.00, stdev=103.24, samples=2 00:25:53.509 lat (msec) : 750=50.00%, 1000=22.22%, 2000=22.62%, >=2000=5.16% 00:25:53.509 cpu : usr=0.01%, sys=0.70%, ctx=437, majf=0, minf=32769 00:25:53.509 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:53.509 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316828: Wed Jul 24 10:47:59 2024 00:25:53.509 read: IOPS=97, BW=98.0MiB/s (103MB/s)(1052MiB/10735msec) 00:25:53.509 slat (usec): min=41, max=2089.7k, avg=10115.80, stdev=107987.04 00:25:53.509 clat (msec): min=85, max=4736, avg=1031.29, stdev=1302.95 00:25:53.509 lat (msec): min=387, max=4739, avg=1041.40, stdev=1307.50 00:25:53.509 clat percentiles (msec): 00:25:53.509 | 1.00th=[ 388], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 397], 00:25:53.509 | 30.00th=[ 401], 40.00th=[ 405], 50.00th=[ 418], 60.00th=[ 718], 00:25:53.509 | 70.00th=[ 726], 80.00th=[ 743], 90.00th=[ 4396], 95.00th=[ 4597], 00:25:53.509 | 99.00th=[ 4665], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:25:53.509 | 99.99th=[ 4732] 00:25:53.509 bw ( KiB/s): min=20439, max=333824, per=6.99%, avg=210185.89, stdev=109089.81, samples=9 00:25:53.509 iops : min= 19, max= 326, avg=205.11, stdev=106.70, samples=9 00:25:53.509 lat (msec) : 100=0.10%, 500=51.81%, 750=30.99%, 1000=2.85%, >=2000=14.26% 
00:25:53.509 cpu : usr=0.07%, sys=1.69%, ctx=973, majf=0, minf=32769 00:25:53.509 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.509 issued rwts: total=1052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316829: Wed Jul 24 10:47:59 2024 00:25:53.509 read: IOPS=1, BW=1629KiB/s (1668kB/s)(17.0MiB/10688msec) 00:25:53.509 slat (msec): min=2, max=2134, avg=623.26, stdev=967.20 00:25:53.509 clat (msec): min=91, max=10636, avg=7917.19, stdev=3303.78 00:25:53.509 lat (msec): min=2188, max=10687, avg=8540.45, stdev=2674.75 00:25:53.509 clat percentiles (msec): 00:25:53.509 | 1.00th=[ 92], 5.00th=[ 92], 10.00th=[ 2198], 20.00th=[ 6409], 00:25:53.509 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10537], 00:25:53.509 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:53.509 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.509 | 99.99th=[10671] 00:25:53.509 lat (msec) : 100=5.88%, >=2000=94.12% 00:25:53.509 cpu : usr=0.00%, sys=0.11%, ctx=71, majf=0, minf=4353 00:25:53.509 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:25:53.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.509 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:53.509 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.509 job2: (groupid=0, jobs=1): err= 0: pid=2316830: Wed Jul 24 10:47:59 2024 00:25:53.510 read: IOPS=2, BW=2780KiB/s (2846kB/s)(29.0MiB/10683msec) 00:25:53.510 slat (usec): min=1094, max=2063.6k, avg=365198.84, stdev=776429.12 00:25:53.510 clat (msec): min=91, max=10681, avg=5999.55, stdev=3424.91 00:25:53.510 lat (msec): min=2102, max=10682, avg=6364.75, stdev=3335.95 00:25:53.510 clat percentiles (msec): 00:25:53.510 | 1.00th=[ 92], 5.00th=[ 2106], 10.00th=[ 2165], 20.00th=[ 2198], 00:25:53.510 | 30.00th=[ 2198], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 6477], 00:25:53.510 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:53.510 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.510 | 99.99th=[10671] 00:25:53.510 lat (msec) : 100=3.45%, >=2000=96.55% 00:25:53.510 cpu : usr=0.00%, sys=0.19%, ctx=64, majf=0, minf=7425 00:25:53.510 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:25:53.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:53.510 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.510 job2: (groupid=0, jobs=1): err= 0: pid=2316831: Wed Jul 24 10:47:59 2024 00:25:53.510 read: IOPS=2, BW=2199KiB/s (2252kB/s)(23.0MiB/10711msec) 00:25:53.510 slat (usec): min=856, max=2143.0k, avg=462384.78, stdev=858877.22 00:25:53.510 clat (msec): min=75, max=10627, avg=6248.31, stdev=3547.89 00:25:53.510 lat (msec): min=2128, max=10710, avg=6710.69, stdev=3396.57 00:25:53.510 clat percentiles (msec): 00:25:53.510 | 1.00th=[ 75], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 
2165], 00:25:53.510 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6544], 60.00th=[ 8557], 00:25:53.510 | 70.00th=[ 8658], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:25:53.510 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.510 | 99.99th=[10671] 00:25:53.510 lat (msec) : 100=4.35%, >=2000=95.65% 00:25:53.510 cpu : usr=0.00%, sys=0.11%, ctx=76, majf=0, minf=5889 00:25:53.510 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:25:53.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:53.510 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.510 job2: (groupid=0, jobs=1): err= 0: pid=2316832: Wed Jul 24 10:47:59 2024 00:25:53.510 read: IOPS=11, BW=11.9MiB/s (12.5MB/s)(127MiB/10691msec) 00:25:53.510 slat (usec): min=346, max=2072.3k, avg=83763.84, stdev=360995.24 00:25:53.510 clat (msec): min=52, max=10687, avg=5351.79, stdev=2854.69 00:25:53.510 lat (msec): min=2028, max=10690, avg=5435.55, stdev=2854.03 00:25:53.510 clat percentiles (msec): 00:25:53.510 | 1.00th=[ 2022], 5.00th=[ 2140], 10.00th=[ 3440], 20.00th=[ 3641], 00:25:53.510 | 30.00th=[ 3809], 40.00th=[ 3910], 50.00th=[ 4010], 60.00th=[ 4212], 00:25:53.510 | 70.00th=[ 4329], 80.00th=[ 8658], 90.00th=[10671], 95.00th=[10671], 00:25:53.510 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.510 | 99.99th=[10671] 00:25:53.510 lat (msec) : 100=0.79%, >=2000=99.21% 00:25:53.510 cpu : usr=0.00%, sys=0.63%, ctx=228, majf=0, minf=32513 00:25:53.510 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.3%, 16=12.6%, 32=25.2%, >=64=50.4% 00:25:53.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:53.510 issued rwts: total=127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.510 job3: (groupid=0, jobs=1): err= 0: pid=2316833: Wed Jul 24 10:47:59 2024 00:25:53.510 read: IOPS=70, BW=70.6MiB/s (74.0MB/s)(756MiB/10711msec) 00:25:53.510 slat (usec): min=55, max=2161.0k, avg=14116.56, stdev=127927.04 00:25:53.510 clat (msec): min=34, max=7067, avg=1738.24, stdev=2272.06 00:25:53.510 lat (msec): min=339, max=7069, avg=1752.36, stdev=2277.37 00:25:53.510 clat percentiles (msec): 00:25:53.510 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 422], 20.00th=[ 510], 00:25:53.510 | 30.00th=[ 634], 40.00th=[ 701], 50.00th=[ 793], 60.00th=[ 885], 00:25:53.510 | 70.00th=[ 969], 80.00th=[ 1020], 90.00th=[ 6879], 95.00th=[ 6946], 00:25:53.510 | 99.00th=[ 7013], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:25:53.510 | 99.99th=[ 7080] 00:25:53.510 bw ( KiB/s): min= 4096, max=327680, per=3.89%, avg=116922.18, stdev=109526.67, samples=11 00:25:53.510 iops : min= 4, max= 320, avg=114.18, stdev=106.96, samples=11 00:25:53.510 lat (msec) : 50=0.13%, 500=11.51%, 750=34.66%, 1000=31.22%, 2000=4.63% 00:25:53.510 lat (msec) : >=2000=17.86% 00:25:53.510 cpu : usr=0.09%, sys=1.32%, ctx=1106, majf=0, minf=32769 00:25:53.510 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:25:53.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.510 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.510 issued rwts: total=756,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:53.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.510 job3: (groupid=0, jobs=1): err= 0: pid=2316834: Wed Jul 24 10:47:59 2024 00:25:53.510 read: IOPS=3, BW=3143KiB/s (3218kB/s)(33.0MiB/10752msec) 00:25:53.510 slat (msec): min=2, max=2109, avg=323.34, stdev=739.63 00:25:53.510 clat (msec): min=80, max=10741, avg=7966.31, stdev=3699.67 00:25:53.510 lat (msec): min=2099, max=10751, avg=8289.64, stdev=3446.59 00:25:53.510 clat percentiles (msec): 00:25:53.510 | 1.00th=[ 82], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2165], 00:25:53.510 | 30.00th=[ 6409], 40.00th=[ 8658], 50.00th=[10537], 60.00th=[10671], 00:25:53.510 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:53.510 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.510 | 99.99th=[10805] 00:25:53.510 lat (msec) : 100=3.03%, >=2000=96.97% 00:25:53.510 cpu : usr=0.00%, sys=0.27%, ctx=91, majf=0, minf=8449 00:25:53.510 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:25:53.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.510 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.510 job3: (groupid=0, jobs=1): err= 0: pid=2316835: Wed Jul 24 10:47:59 2024 00:25:53.510 read: IOPS=167, BW=167MiB/s (175MB/s)(1799MiB/10771msec) 00:25:53.510 slat (usec): min=45, max=2035.1k, avg=5941.03, stdev=82627.55 00:25:53.510 clat (msec): min=78, max=4717, avg=743.83, stdev=1170.05 00:25:53.510 lat (msec): min=254, max=4719, avg=749.77, stdev=1174.09 00:25:53.510 clat percentiles (msec): 00:25:53.510 | 1.00th=[ 255], 5.00th=[ 257], 10.00th=[ 262], 20.00th=[ 268], 00:25:53.510 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 284], 00:25:53.510 | 70.00th=[ 317], 80.00th=[ 397], 90.00th=[ 2366], 95.00th=[ 4665], 00:25:53.510 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:25:53.510 | 99.99th=[ 4732] 00:25:53.510 bw ( KiB/s): min=24576, max=495616, per=9.48%, avg=285013.33, stdev=199388.50, samples=12 00:25:53.510 iops : min= 24, max= 484, avg=278.50, stdev=194.90, samples=12 00:25:53.510 lat (msec) : 100=0.06%, 500=85.10%, >=2000=14.84% 00:25:53.510 cpu : usr=0.05%, sys=1.94%, ctx=1732, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.511 issued rwts: total=1799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.511 job3: (groupid=0, jobs=1): err= 0: pid=2316836: Wed Jul 24 10:47:59 2024 00:25:53.511 read: IOPS=12, BW=12.3MiB/s (12.9MB/s)(130MiB/10607msec) 00:25:53.511 slat (usec): min=441, max=2050.9k, avg=81574.79, stdev=353916.99 00:25:53.511 clat (usec): min=1002, max=10571k, avg=4192519.52, stdev=1531097.63 00:25:53.511 lat (msec): min=2017, max=10588, avg=4274.09, stdev=1587.36 00:25:53.511 clat percentiles (msec): 00:25:53.511 | 1.00th=[ 2022], 5.00th=[ 2937], 10.00th=[ 3037], 20.00th=[ 3171], 00:25:53.511 | 30.00th=[ 3373], 40.00th=[ 3608], 50.00th=[ 3742], 60.00th=[ 3943], 00:25:53.511 | 70.00th=[ 4178], 80.00th=[ 4279], 90.00th=[ 6477], 95.00th=[ 6477], 00:25:53.511 | 
99.00th=[ 8557], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:25:53.511 | 99.99th=[10537] 00:25:53.511 bw ( KiB/s): min= 4087, max= 4087, per=0.14%, avg=4087.00, stdev= 0.00, samples=1 00:25:53.511 iops : min= 3, max= 3, avg= 3.00, stdev= 0.00, samples=1 00:25:53.511 lat (msec) : 2=0.77%, >=2000=99.23% 00:25:53.511 cpu : usr=0.00%, sys=0.58%, ctx=224, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.2%, 16=12.3%, 32=24.6%, >=64=51.5% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 complete : 0=0.0%, 4=75.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=25.0% 00:25:53.511 issued rwts: total=130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.511 job3: (groupid=0, jobs=1): err= 0: pid=2316837: Wed Jul 24 10:47:59 2024 00:25:53.511 read: IOPS=41, BW=41.6MiB/s (43.6MB/s)(446MiB/10721msec) 00:25:53.511 slat (usec): min=33, max=2111.2k, avg=23822.87, stdev=185338.65 00:25:53.511 clat (msec): min=92, max=6874, avg=2428.68, stdev=2277.54 00:25:53.511 lat (msec): min=716, max=6885, avg=2452.50, stdev=2281.55 00:25:53.511 clat percentiles (msec): 00:25:53.511 | 1.00th=[ 718], 5.00th=[ 718], 10.00th=[ 718], 20.00th=[ 726], 00:25:53.511 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 1938], 00:25:53.511 | 70.00th=[ 2869], 80.00th=[ 5000], 90.00th=[ 6611], 95.00th=[ 6745], 00:25:53.511 | 99.00th=[ 6812], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:25:53.511 | 99.99th=[ 6879] 00:25:53.511 bw ( KiB/s): min=24576, max=180224, per=3.61%, avg=108481.83, stdev=61075.22, samples=6 00:25:53.511 iops : min= 24, max= 176, avg=105.67, stdev=59.73, samples=6 00:25:53.511 lat (msec) : 100=0.22%, 750=51.35%, 1000=4.04%, 2000=4.48%, >=2000=39.91% 00:25:53.511 cpu : usr=0.05%, sys=0.93%, ctx=417, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:53.511 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.511 job3: (groupid=0, jobs=1): err= 0: pid=2316838: Wed Jul 24 10:47:59 2024 00:25:53.511 read: IOPS=19, BW=19.8MiB/s (20.8MB/s)(211MiB/10639msec) 00:25:53.511 slat (usec): min=449, max=2093.8k, avg=50246.09, stdev=275850.56 00:25:53.511 clat (msec): min=36, max=5600, avg=3603.83, stdev=1734.16 00:25:53.511 lat (msec): min=1188, max=5654, avg=3654.08, stdev=1714.57 00:25:53.511 clat percentiles (msec): 00:25:53.511 | 1.00th=[ 1183], 5.00th=[ 1200], 10.00th=[ 1217], 20.00th=[ 1250], 00:25:53.511 | 30.00th=[ 1401], 40.00th=[ 4329], 50.00th=[ 4597], 60.00th=[ 4799], 00:25:53.511 | 70.00th=[ 4866], 80.00th=[ 5067], 90.00th=[ 5336], 95.00th=[ 5470], 00:25:53.511 | 99.00th=[ 5604], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:25:53.511 | 99.99th=[ 5604] 00:25:53.511 bw ( KiB/s): min= 2043, max=92160, per=1.41%, avg=42491.50, stdev=45320.10, samples=4 00:25:53.511 iops : min= 1, max= 90, avg=41.00, stdev=44.82, samples=4 00:25:53.511 lat (msec) : 50=0.47%, 2000=32.23%, >=2000=67.30% 00:25:53.511 cpu : usr=0.00%, sys=0.61%, ctx=347, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.6%, 32=15.2%, >=64=70.1% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 
complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:25:53.511 issued rwts: total=211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.511 job3: (groupid=0, jobs=1): err= 0: pid=2316839: Wed Jul 24 10:47:59 2024 00:25:53.511 read: IOPS=23, BW=23.7MiB/s (24.8MB/s)(254MiB/10731msec) 00:25:53.511 slat (usec): min=43, max=2156.9k, avg=42099.33, stdev=253813.07 00:25:53.511 clat (msec): min=36, max=9517, avg=5050.03, stdev=3443.77 00:25:53.511 lat (msec): min=1107, max=9523, avg=5092.13, stdev=3436.18 00:25:53.511 clat percentiles (msec): 00:25:53.511 | 1.00th=[ 1099], 5.00th=[ 1133], 10.00th=[ 1167], 20.00th=[ 1217], 00:25:53.511 | 30.00th=[ 1267], 40.00th=[ 3406], 50.00th=[ 4279], 60.00th=[ 6275], 00:25:53.511 | 70.00th=[ 8658], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:25:53.511 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:25:53.511 | 99.99th=[ 9463] 00:25:53.511 bw ( KiB/s): min= 4096, max=86016, per=1.23%, avg=36864.00, stdev=30491.60, samples=7 00:25:53.511 iops : min= 4, max= 84, avg=36.00, stdev=29.78, samples=7 00:25:53.511 lat (msec) : 50=0.39%, 2000=35.43%, >=2000=64.17% 00:25:53.511 cpu : usr=0.02%, sys=0.81%, ctx=487, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:53.511 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.511 job3: (groupid=0, jobs=1): err= 0: pid=2316840: Wed Jul 24 10:47:59 2024 00:25:53.511 read: IOPS=29, BW=29.0MiB/s (30.5MB/s)(309MiB/10639msec) 00:25:53.511 slat (usec): min=90, max=4130.6k, avg=34305.91, stdev=273107.48 00:25:53.511 clat (msec): min=35, max=9168, avg=4130.57, stdev=3075.78 00:25:53.511 lat (msec): min=825, max=9174, avg=4164.88, stdev=3076.65 00:25:53.511 clat percentiles (msec): 00:25:53.511 | 1.00th=[ 852], 5.00th=[ 894], 10.00th=[ 919], 20.00th=[ 944], 00:25:53.511 | 30.00th=[ 1011], 40.00th=[ 3205], 50.00th=[ 3641], 60.00th=[ 5537], 00:25:53.511 | 70.00th=[ 5873], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060], 00:25:53.511 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:25:53.511 | 99.99th=[ 9194] 00:25:53.511 bw ( KiB/s): min= 2048, max=118784, per=1.76%, avg=52955.43, stdev=46347.41, samples=7 00:25:53.511 iops : min= 2, max= 116, avg=51.71, stdev=45.26, samples=7 00:25:53.511 lat (msec) : 50=0.32%, 1000=23.95%, 2000=14.89%, >=2000=60.84% 00:25:53.511 cpu : usr=0.01%, sys=0.91%, ctx=482, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.6% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:53.511 issued rwts: total=309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.511 job3: (groupid=0, jobs=1): err= 0: pid=2316841: Wed Jul 24 10:47:59 2024 00:25:53.511 read: IOPS=20, BW=21.0MiB/s (22.0MB/s)(225MiB/10734msec) 00:25:53.511 slat (usec): min=50, max=2050.1k, avg=47332.14, stdev=251230.51 00:25:53.511 clat (msec): min=82, max=6458, avg=4603.43, stdev=1177.75 00:25:53.511 lat (msec): min=2132, max=6459, avg=4650.76, 
stdev=1137.68 00:25:53.511 clat percentiles (msec): 00:25:53.511 | 1.00th=[ 2299], 5.00th=[ 2937], 10.00th=[ 3104], 20.00th=[ 3373], 00:25:53.511 | 30.00th=[ 3708], 40.00th=[ 4329], 50.00th=[ 4732], 60.00th=[ 4933], 00:25:53.511 | 70.00th=[ 5336], 80.00th=[ 5873], 90.00th=[ 6208], 95.00th=[ 6409], 00:25:53.511 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:25:53.511 | 99.99th=[ 6477] 00:25:53.511 bw ( KiB/s): min= 2052, max=53248, per=1.10%, avg=33107.83, stdev=22932.87, samples=6 00:25:53.511 iops : min= 2, max= 52, avg=32.17, stdev=22.63, samples=6 00:25:53.511 lat (msec) : 100=0.44%, >=2000=99.56% 00:25:53.511 cpu : usr=0.02%, sys=0.78%, ctx=554, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.2%, >=64=72.0% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:25:53.511 issued rwts: total=225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.511 job3: (groupid=0, jobs=1): err= 0: pid=2316842: Wed Jul 24 10:47:59 2024 00:25:53.511 read: IOPS=61, BW=61.3MiB/s (64.3MB/s)(658MiB/10729msec) 00:25:53.511 slat (usec): min=49, max=2045.8k, avg=16285.48, stdev=118830.01 00:25:53.511 clat (msec): min=8, max=6338, avg=2003.49, stdev=1777.31 00:25:53.511 lat (msec): min=642, max=6358, avg=2019.77, stdev=1787.34 00:25:53.511 clat percentiles (msec): 00:25:53.511 | 1.00th=[ 642], 5.00th=[ 651], 10.00th=[ 676], 20.00th=[ 709], 00:25:53.511 | 30.00th=[ 743], 40.00th=[ 768], 50.00th=[ 802], 60.00th=[ 1670], 00:25:53.511 | 70.00th=[ 2265], 80.00th=[ 3708], 90.00th=[ 4866], 95.00th=[ 6074], 00:25:53.511 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6342], 99.95th=[ 6342], 00:25:53.511 | 99.99th=[ 6342] 00:25:53.511 bw ( KiB/s): min= 2048, max=194560, per=3.01%, avg=90453.33, stdev=64656.40, samples=12 00:25:53.511 iops : min= 2, max= 190, avg=88.33, stdev=63.14, samples=12 00:25:53.511 lat (msec) : 10=0.15%, 750=33.13%, 1000=23.10%, 2000=10.18%, >=2000=33.43% 00:25:53.511 cpu : usr=0.04%, sys=1.41%, ctx=732, majf=0, minf=32769 00:25:53.511 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:25:53.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.511 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.511 issued rwts: total=658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job3: (groupid=0, jobs=1): err= 0: pid=2316843: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=9, BW=9.85MiB/s (10.3MB/s)(106MiB/10757msec) 00:25:53.512 slat (usec): min=579, max=2079.7k, avg=101136.29, stdev=400280.99 00:25:53.512 clat (msec): min=36, max=10755, avg=7360.81, stdev=2639.95 00:25:53.512 lat (msec): min=2078, max=10756, avg=7461.95, stdev=2560.84 00:25:53.512 clat percentiles (msec): 00:25:53.512 | 1.00th=[ 2072], 5.00th=[ 4178], 10.00th=[ 4329], 20.00th=[ 5671], 00:25:53.512 | 30.00th=[ 5873], 40.00th=[ 6007], 50.00th=[ 6141], 60.00th=[ 6409], 00:25:53.512 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:25:53.512 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.512 | 99.99th=[10805] 00:25:53.512 lat (msec) : 50=0.94%, >=2000=99.06% 00:25:53.512 cpu : usr=0.00%, sys=0.61%, ctx=226, majf=0, minf=27137 00:25:53.512 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 
8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:25:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:53.512 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job3: (groupid=0, jobs=1): err= 0: pid=2316844: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=103, BW=103MiB/s (108MB/s)(1036MiB/10052msec) 00:25:53.512 slat (usec): min=40, max=1932.3k, avg=9649.30, stdev=88023.84 00:25:53.512 clat (msec): min=49, max=4812, avg=949.21, stdev=938.62 00:25:53.512 lat (msec): min=51, max=6745, avg=958.86, stdev=952.01 00:25:53.512 clat percentiles (msec): 00:25:53.512 | 1.00th=[ 88], 5.00th=[ 211], 10.00th=[ 372], 20.00th=[ 397], 00:25:53.512 | 30.00th=[ 418], 40.00th=[ 642], 50.00th=[ 667], 60.00th=[ 718], 00:25:53.512 | 70.00th=[ 751], 80.00th=[ 827], 90.00th=[ 2735], 95.00th=[ 3440], 00:25:53.512 | 99.00th=[ 3507], 99.50th=[ 3540], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:53.512 | 99.99th=[ 4799] 00:25:53.512 bw ( KiB/s): min=12263, max=323584, per=5.16%, avg=155102.17, stdev=104332.15, samples=12 00:25:53.512 iops : min= 11, max= 316, avg=151.33, stdev=101.99, samples=12 00:25:53.512 lat (msec) : 50=0.10%, 100=1.64%, 250=4.54%, 500=28.28%, 750=34.56% 00:25:53.512 lat (msec) : 1000=13.51%, 2000=3.86%, >=2000=13.51% 00:25:53.512 cpu : usr=0.08%, sys=1.30%, ctx=1223, majf=0, minf=32769 00:25:53.512 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:25:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.512 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job3: (groupid=0, jobs=1): err= 0: pid=2316845: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=3, BW=3622KiB/s (3709kB/s)(38.0MiB/10744msec) 00:25:53.512 slat (usec): min=656, max=2083.3k, avg=280154.74, stdev=689978.41 00:25:53.512 clat (msec): min=97, max=10742, avg=6386.26, stdev=3445.31 00:25:53.512 lat (msec): min=2122, max=10743, avg=6666.41, stdev=3351.71 00:25:53.512 clat percentiles (msec): 00:25:53.512 | 1.00th=[ 99], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2198], 00:25:53.512 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 8557], 00:25:53.512 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:25:53.512 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.512 | 99.99th=[10805] 00:25:53.512 lat (msec) : 100=2.63%, >=2000=97.37% 00:25:53.512 cpu : usr=0.02%, sys=0.23%, ctx=87, majf=0, minf=9729 00:25:53.512 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:25:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.512 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job4: (groupid=0, jobs=1): err= 0: pid=2316846: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=3, BW=3806KiB/s (3897kB/s)(40.0MiB/10762msec) 00:25:53.512 slat (usec): min=355, max=2110.7k, avg=266361.93, stdev=660407.08 00:25:53.512 clat (msec): min=106, max=10752, avg=5632.83, stdev=3038.70 00:25:53.512 lat (msec): min=2043, 
max=10761, avg=5899.20, stdev=3008.72 00:25:53.512 clat percentiles (msec): 00:25:53.512 | 1.00th=[ 107], 5.00th=[ 2039], 10.00th=[ 2140], 20.00th=[ 2140], 00:25:53.512 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 6409], 00:25:53.512 | 70.00th=[ 6409], 80.00th=[ 8658], 90.00th=[10671], 95.00th=[10671], 00:25:53.512 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.512 | 99.99th=[10805] 00:25:53.512 lat (msec) : 250=2.50%, >=2000=97.50% 00:25:53.512 cpu : usr=0.00%, sys=0.20%, ctx=96, majf=0, minf=10241 00:25:53.512 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:25:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.512 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job4: (groupid=0, jobs=1): err= 0: pid=2316847: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=6, BW=6174KiB/s (6322kB/s)(65.0MiB/10781msec) 00:25:53.512 slat (usec): min=451, max=2110.7k, avg=164476.60, stdev=528510.63 00:25:53.512 clat (msec): min=89, max=10778, avg=8724.91, stdev=2606.89 00:25:53.512 lat (msec): min=2138, max=10780, avg=8889.39, stdev=2380.99 00:25:53.512 clat percentiles (msec): 00:25:53.512 | 1.00th=[ 90], 5.00th=[ 2198], 10.00th=[ 6477], 20.00th=[ 6477], 00:25:53.512 | 30.00th=[ 6544], 40.00th=[10134], 50.00th=[10268], 60.00th=[10402], 00:25:53.512 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10805], 00:25:53.512 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.512 | 99.99th=[10805] 00:25:53.512 lat (msec) : 100=1.54%, >=2000=98.46% 00:25:53.512 cpu : usr=0.00%, sys=0.40%, ctx=199, majf=0, minf=16641 00:25:53.512 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:25:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:53.512 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job4: (groupid=0, jobs=1): err= 0: pid=2316848: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=16, BW=16.0MiB/s (16.8MB/s)(173MiB/10807msec) 00:25:53.512 slat (usec): min=32, max=2097.5k, avg=61947.68, stdev=307506.19 00:25:53.512 clat (msec): min=88, max=8452, avg=4350.23, stdev=2171.57 00:25:53.512 lat (msec): min=2144, max=8478, avg=4412.18, stdev=2176.27 00:25:53.512 clat percentiles (msec): 00:25:53.512 | 1.00th=[ 2140], 5.00th=[ 2232], 10.00th=[ 2467], 20.00th=[ 2668], 00:25:53.512 | 30.00th=[ 2970], 40.00th=[ 3205], 50.00th=[ 3440], 60.00th=[ 3809], 00:25:53.512 | 70.00th=[ 4329], 80.00th=[ 8020], 90.00th=[ 8154], 95.00th=[ 8288], 00:25:53.512 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:25:53.512 | 99.99th=[ 8423] 00:25:53.512 bw ( KiB/s): min= 2048, max=61440, per=1.02%, avg=30720.00, stdev=29748.92, samples=3 00:25:53.512 iops : min= 2, max= 60, avg=30.00, stdev=29.05, samples=3 00:25:53.512 lat (msec) : 100=0.58%, >=2000=99.42% 00:25:53.512 cpu : usr=0.00%, sys=0.73%, ctx=360, majf=0, minf=32769 00:25:53.512 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.5%, >=64=63.6% 00:25:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.512 complete : 0=0.0%, 4=97.9%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:25:53.512 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job4: (groupid=0, jobs=1): err= 0: pid=2316849: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=122, BW=122MiB/s (128MB/s)(1317MiB/10790msec) 00:25:53.512 slat (usec): min=42, max=2078.6k, avg=8105.43, stdev=75446.31 00:25:53.512 clat (msec): min=107, max=3188, avg=994.64, stdev=841.40 00:25:53.512 lat (msec): min=385, max=3197, avg=1002.75, stdev=844.10 00:25:53.512 clat percentiles (msec): 00:25:53.512 | 1.00th=[ 384], 5.00th=[ 388], 10.00th=[ 388], 20.00th=[ 393], 00:25:53.512 | 30.00th=[ 401], 40.00th=[ 558], 50.00th=[ 667], 60.00th=[ 735], 00:25:53.512 | 70.00th=[ 810], 80.00th=[ 1167], 90.00th=[ 2567], 95.00th=[ 3037], 00:25:53.512 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3171], 99.95th=[ 3205], 00:25:53.512 | 99.99th=[ 3205] 00:25:53.512 bw ( KiB/s): min=40960, max=337920, per=6.23%, avg=187313.23, stdev=95930.88, samples=13 00:25:53.512 iops : min= 40, max= 330, avg=182.92, stdev=93.68, samples=13 00:25:53.512 lat (msec) : 250=0.08%, 500=37.21%, 750=25.82%, 1000=7.67%, 2000=12.15% 00:25:53.512 lat (msec) : >=2000=17.08% 00:25:53.512 cpu : usr=0.04%, sys=1.88%, ctx=1459, majf=0, minf=32769 00:25:53.512 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:25:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.512 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.512 job4: (groupid=0, jobs=1): err= 0: pid=2316850: Wed Jul 24 10:47:59 2024 00:25:53.512 read: IOPS=35, BW=35.7MiB/s (37.5MB/s)(384MiB/10747msec) 00:25:53.512 slat (usec): min=54, max=2090.9k, avg=27756.78, stdev=204471.83 00:25:53.512 clat (msec): min=85, max=7035, avg=2726.69, stdev=2791.25 00:25:53.512 lat (msec): min=603, max=7035, avg=2754.44, stdev=2792.36 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 600], 5.00th=[ 609], 10.00th=[ 609], 20.00th=[ 617], 00:25:53.513 | 30.00th=[ 617], 40.00th=[ 634], 50.00th=[ 659], 60.00th=[ 927], 00:25:53.513 | 70.00th=[ 6342], 80.00th=[ 6678], 90.00th=[ 6879], 95.00th=[ 6946], 00:25:53.513 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:25:53.513 | 99.99th=[ 7013] 00:25:53.513 bw ( KiB/s): min= 6144, max=208896, per=3.49%, avg=104857.60, stdev=94357.03, samples=5 00:25:53.513 iops : min= 6, max= 204, avg=102.40, stdev=92.15, samples=5 00:25:53.513 lat (msec) : 100=0.26%, 750=52.86%, 1000=8.33%, 2000=1.04%, >=2000=37.50% 00:25:53.513 cpu : usr=0.03%, sys=1.10%, ctx=427, majf=0, minf=32769 00:25:53.513 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:25:53.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.513 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:53.513 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.513 job4: (groupid=0, jobs=1): err= 0: pid=2316851: Wed Jul 24 10:47:59 2024 00:25:53.513 read: IOPS=44, BW=44.1MiB/s (46.3MB/s)(476MiB/10791msec) 00:25:53.513 slat (usec): min=42, max=2073.4k, avg=22425.77, stdev=183146.06 00:25:53.513 clat (msec): min=113, max=8921, avg=2732.10, stdev=3041.11 00:25:53.513 lat (msec): min=499, 
max=8939, avg=2754.52, stdev=3049.96 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 502], 5.00th=[ 502], 10.00th=[ 502], 20.00th=[ 510], 00:25:53.513 | 30.00th=[ 514], 40.00th=[ 535], 50.00th=[ 693], 60.00th=[ 1003], 00:25:53.513 | 70.00th=[ 4329], 80.00th=[ 6275], 90.00th=[ 8792], 95.00th=[ 8792], 00:25:53.513 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:25:53.513 | 99.99th=[ 8926] 00:25:53.513 bw ( KiB/s): min= 4096, max=258048, per=2.63%, avg=79185.67, stdev=100286.64, samples=9 00:25:53.513 iops : min= 4, max= 252, avg=77.22, stdev=98.01, samples=9 00:25:53.513 lat (msec) : 250=0.21%, 500=1.47%, 750=51.26%, 1000=6.93%, 2000=0.42% 00:25:53.513 lat (msec) : >=2000=39.71% 00:25:53.513 cpu : usr=0.02%, sys=1.11%, ctx=549, majf=0, minf=32769 00:25:53.513 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.8% 00:25:53.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.513 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:53.513 issued rwts: total=476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.513 job4: (groupid=0, jobs=1): err= 0: pid=2316852: Wed Jul 24 10:47:59 2024 00:25:53.513 read: IOPS=316, BW=316MiB/s (332MB/s)(3179MiB/10054msec) 00:25:53.513 slat (usec): min=39, max=2027.5k, avg=3144.35, stdev=49532.31 00:25:53.513 clat (msec): min=44, max=4508, avg=313.18, stdev=590.97 00:25:53.513 lat (msec): min=72, max=4510, avg=316.33, stdev=595.76 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 131], 00:25:53.513 | 30.00th=[ 132], 40.00th=[ 132], 50.00th=[ 144], 60.00th=[ 259], 00:25:53.513 | 70.00th=[ 309], 80.00th=[ 376], 90.00th=[ 401], 95.00th=[ 439], 00:25:53.513 | 99.00th=[ 4463], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:25:53.513 | 99.99th=[ 4530] 00:25:53.513 bw ( KiB/s): min=24576, max=995328, per=17.33%, avg=520874.67, stdev=308686.27, samples=12 00:25:53.513 iops : min= 24, max= 972, avg=508.67, stdev=301.45, samples=12 00:25:53.513 lat (msec) : 50=0.03%, 100=0.50%, 250=53.85%, 500=43.32%, 750=0.06% 00:25:53.513 lat (msec) : >=2000=2.23% 00:25:53.513 cpu : usr=0.10%, sys=3.20%, ctx=3010, majf=0, minf=32137 00:25:53.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:53.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.513 issued rwts: total=3179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.513 job4: (groupid=0, jobs=1): err= 0: pid=2316853: Wed Jul 24 10:47:59 2024 00:25:53.513 read: IOPS=10, BW=10.8MiB/s (11.4MB/s)(116MiB/10702msec) 00:25:53.513 slat (usec): min=410, max=2155.7k, avg=91342.06, stdev=360702.42 00:25:53.513 clat (msec): min=105, max=10666, avg=6307.33, stdev=3432.76 00:25:53.513 lat (msec): min=2041, max=10701, avg=6398.67, stdev=3407.17 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 2039], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 3440], 00:25:53.513 | 30.00th=[ 3675], 40.00th=[ 3876], 50.00th=[ 4144], 60.00th=[ 8658], 00:25:53.513 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10402], 95.00th=[10537], 00:25:53.513 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.513 | 99.99th=[10671] 00:25:53.513 lat (msec) : 250=0.86%, >=2000=99.14% 
00:25:53.513 cpu : usr=0.00%, sys=0.60%, ctx=273, majf=0, minf=29697 00:25:53.513 IO depths : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.9%, 16=13.8%, 32=27.6%, >=64=45.7% 00:25:53.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.513 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:53.513 issued rwts: total=116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.513 job4: (groupid=0, jobs=1): err= 0: pid=2316854: Wed Jul 24 10:47:59 2024 00:25:53.513 read: IOPS=3, BW=3981KiB/s (4077kB/s)(42.0MiB/10803msec) 00:25:53.513 slat (usec): min=586, max=2127.7k, avg=255071.64, stdev=667751.96 00:25:53.513 clat (msec): min=89, max=10800, avg=9121.05, stdev=2868.31 00:25:53.513 lat (msec): min=2138, max=10802, avg=9376.12, stdev=2497.96 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 89], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 8557], 00:25:53.513 | 30.00th=[ 8658], 40.00th=[10537], 50.00th=[10671], 60.00th=[10805], 00:25:53.513 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:25:53.513 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:53.513 | 99.99th=[10805] 00:25:53.513 lat (msec) : 100=2.38%, >=2000=97.62% 00:25:53.513 cpu : usr=0.00%, sys=0.33%, ctx=113, majf=0, minf=10753 00:25:53.513 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:25:53.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.513 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:53.513 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.513 job4: (groupid=0, jobs=1): err= 0: pid=2316855: Wed Jul 24 10:47:59 2024 00:25:53.513 read: IOPS=27, BW=27.4MiB/s (28.7MB/s)(295MiB/10772msec) 00:25:53.513 slat (usec): min=148, max=2126.9k, avg=36217.62, stdev=241680.56 00:25:53.513 clat (msec): min=86, max=10691, avg=4448.63, stdev=3964.57 00:25:53.513 lat (msec): min=892, max=10695, avg=4484.85, stdev=3963.85 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 894], 5.00th=[ 902], 10.00th=[ 919], 20.00th=[ 936], 00:25:53.513 | 30.00th=[ 944], 40.00th=[ 978], 50.00th=[ 1045], 60.00th=[ 8658], 00:25:53.513 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:25:53.513 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[10671], 99.95th=[10671], 00:25:53.513 | 99.99th=[10671] 00:25:53.513 bw ( KiB/s): min= 4096, max=141312, per=1.63%, avg=48859.43, stdev=61603.95, samples=7 00:25:53.513 iops : min= 4, max= 138, avg=47.71, stdev=60.16, samples=7 00:25:53.513 lat (msec) : 100=0.34%, 1000=44.41%, 2000=9.83%, >=2000=45.42% 00:25:53.513 cpu : usr=0.02%, sys=0.99%, ctx=534, majf=0, minf=32769 00:25:53.513 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.6% 00:25:53.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.513 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:25:53.513 issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.513 job4: (groupid=0, jobs=1): err= 0: pid=2316856: Wed Jul 24 10:47:59 2024 00:25:53.513 read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(370MiB/10698msec) 00:25:53.513 slat (usec): min=50, max=3762.4k, avg=28773.55, stdev=245559.20 00:25:53.513 clat (msec): min=49, max=8933, 
avg=3432.54, stdev=3518.52 00:25:53.513 lat (msec): min=647, max=8939, avg=3461.32, stdev=3522.49 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 642], 5.00th=[ 651], 10.00th=[ 659], 20.00th=[ 676], 00:25:53.513 | 30.00th=[ 709], 40.00th=[ 827], 50.00th=[ 927], 60.00th=[ 1167], 00:25:53.513 | 70.00th=[ 6879], 80.00th=[ 8356], 90.00th=[ 8792], 95.00th=[ 8792], 00:25:53.513 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:25:53.513 | 99.99th=[ 8926] 00:25:53.513 bw ( KiB/s): min= 4096, max=196215, per=2.35%, avg=70746.14, stdev=73319.97, samples=7 00:25:53.513 iops : min= 4, max= 191, avg=69.00, stdev=71.43, samples=7 00:25:53.513 lat (msec) : 50=0.27%, 750=36.76%, 1000=18.92%, 2000=4.86%, >=2000=39.19% 00:25:53.513 cpu : usr=0.00%, sys=0.85%, ctx=453, majf=0, minf=32769 00:25:53.513 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:25:53.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.513 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:53.513 issued rwts: total=370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.513 job4: (groupid=0, jobs=1): err= 0: pid=2316857: Wed Jul 24 10:47:59 2024 00:25:53.513 read: IOPS=14, BW=14.6MiB/s (15.3MB/s)(157MiB/10731msec) 00:25:53.513 slat (usec): min=440, max=4227.4k, avg=67803.02, stdev=400689.58 00:25:53.513 clat (msec): min=84, max=10412, avg=4172.23, stdev=2199.11 00:25:53.513 lat (msec): min=2113, max=10496, avg=4240.03, stdev=2234.42 00:25:53.513 clat percentiles (msec): 00:25:53.513 | 1.00th=[ 2106], 5.00th=[ 2433], 10.00th=[ 2500], 20.00th=[ 2567], 00:25:53.513 | 30.00th=[ 2769], 40.00th=[ 2970], 50.00th=[ 3306], 60.00th=[ 3641], 00:25:53.513 | 70.00th=[ 3977], 80.00th=[ 8087], 90.00th=[ 8154], 95.00th=[ 8221], 00:25:53.513 | 99.00th=[ 8658], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:53.513 | 99.99th=[10402] 00:25:53.513 bw ( KiB/s): min= 6144, max=53248, per=0.99%, avg=29696.00, stdev=33307.56, samples=2 00:25:53.513 iops : min= 6, max= 52, avg=29.00, stdev=32.53, samples=2 00:25:53.513 lat (msec) : 100=0.64%, >=2000=99.36% 00:25:53.513 cpu : usr=0.00%, sys=0.66%, ctx=323, majf=0, minf=32769 00:25:53.513 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.2%, 32=20.4%, >=64=59.9% 00:25:53.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.514 complete : 0=0.0%, 4=96.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.2% 00:25:53.514 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.514 job4: (groupid=0, jobs=1): err= 0: pid=2316858: Wed Jul 24 10:47:59 2024 00:25:53.514 read: IOPS=7, BW=7983KiB/s (8174kB/s)(83.0MiB/10647msec) 00:25:53.514 slat (usec): min=370, max=2077.2k, avg=127662.04, stdev=459778.82 00:25:53.514 clat (msec): min=50, max=10605, avg=8328.82, stdev=2967.17 00:25:53.514 lat (msec): min=2074, max=10646, avg=8456.48, stdev=2831.49 00:25:53.514 clat percentiles (msec): 00:25:53.514 | 1.00th=[ 51], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 6409], 00:25:53.514 | 30.00th=[ 6477], 40.00th=[10000], 50.00th=[10000], 60.00th=[10134], 00:25:53.514 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10402], 95.00th=[10537], 00:25:53.514 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.514 | 99.99th=[10671] 00:25:53.514 lat (msec) : 100=1.20%, >=2000=98.80% 00:25:53.514 cpu : usr=0.00%, 
sys=0.45%, ctx=203, majf=0, minf=21249 00:25:53.514 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:25:53.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:53.514 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.514 job5: (groupid=0, jobs=1): err= 0: pid=2316860: Wed Jul 24 10:47:59 2024 00:25:53.514 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(202MiB/10741msec) 00:25:53.514 slat (usec): min=48, max=2186.6k, avg=52607.40, stdev=300841.13 00:25:53.514 clat (msec): min=113, max=7014, avg=3401.57, stdev=1655.48 00:25:53.514 lat (msec): min=526, max=7015, avg=3454.17, stdev=1669.31 00:25:53.514 clat percentiles (msec): 00:25:53.514 | 1.00th=[ 527], 5.00th=[ 567], 10.00th=[ 584], 20.00th=[ 2140], 00:25:53.514 | 30.00th=[ 3574], 40.00th=[ 3608], 50.00th=[ 3675], 60.00th=[ 3742], 00:25:53.514 | 70.00th=[ 3809], 80.00th=[ 3910], 90.00th=[ 5000], 95.00th=[ 7013], 00:25:53.514 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:25:53.514 | 99.99th=[ 7013] 00:25:53.514 bw ( KiB/s): min= 4096, max=124678, per=1.68%, avg=50419.00, stdev=64963.99, samples=3 00:25:53.514 iops : min= 4, max= 121, avg=48.67, stdev=63.22, samples=3 00:25:53.514 lat (msec) : 250=0.50%, 500=0.50%, 750=13.37%, 2000=5.45%, >=2000=80.20% 00:25:53.514 cpu : usr=0.00%, sys=0.65%, ctx=278, majf=0, minf=32769 00:25:53.514 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8% 00:25:53.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.514 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:25:53.514 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.514 job5: (groupid=0, jobs=1): err= 0: pid=2316861: Wed Jul 24 10:47:59 2024 00:25:53.514 read: IOPS=60, BW=60.3MiB/s (63.3MB/s)(606MiB/10046msec) 00:25:53.514 slat (usec): min=697, max=2111.3k, avg=16499.06, stdev=145109.77 00:25:53.514 clat (msec): min=44, max=7259, avg=2032.55, stdev=2581.81 00:25:53.514 lat (msec): min=46, max=7263, avg=2049.05, stdev=2589.65 00:25:53.514 clat percentiles (msec): 00:25:53.514 | 1.00th=[ 73], 5.00th=[ 230], 10.00th=[ 418], 20.00th=[ 659], 00:25:53.514 | 30.00th=[ 676], 40.00th=[ 735], 50.00th=[ 776], 60.00th=[ 793], 00:25:53.514 | 70.00th=[ 844], 80.00th=[ 5000], 90.00th=[ 7148], 95.00th=[ 7215], 00:25:53.514 | 99.00th=[ 7215], 99.50th=[ 7282], 99.90th=[ 7282], 99.95th=[ 7282], 00:25:53.514 | 99.99th=[ 7282] 00:25:53.514 bw ( KiB/s): min= 2048, max=196608, per=3.26%, avg=98099.20, stdev=78648.71, samples=10 00:25:53.514 iops : min= 2, max= 192, avg=95.80, stdev=76.81, samples=10 00:25:53.514 lat (msec) : 50=0.50%, 100=1.16%, 250=3.80%, 500=7.10%, 750=31.52% 00:25:53.514 lat (msec) : 1000=33.00%, >=2000=22.94% 00:25:53.514 cpu : usr=0.02%, sys=1.36%, ctx=1411, majf=0, minf=32769 00:25:53.514 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:25:53.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.514 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.514 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.514 job5: (groupid=0, jobs=1): err= 0: pid=2316862: Wed 
Jul 24 10:47:59 2024 00:25:53.514 read: IOPS=63, BW=63.0MiB/s (66.1MB/s)(632MiB/10031msec) 00:25:53.514 slat (usec): min=411, max=2108.6k, avg=15820.83, stdev=140206.87 00:25:53.514 clat (msec): min=29, max=7153, avg=1908.29, stdev=2484.25 00:25:53.514 lat (msec): min=30, max=7162, avg=1924.11, stdev=2492.16 00:25:53.514 clat percentiles (msec): 00:25:53.514 | 1.00th=[ 37], 5.00th=[ 114], 10.00th=[ 317], 20.00th=[ 600], 00:25:53.514 | 30.00th=[ 625], 40.00th=[ 693], 50.00th=[ 718], 60.00th=[ 760], 00:25:53.514 | 70.00th=[ 776], 80.00th=[ 4866], 90.00th=[ 7080], 95.00th=[ 7080], 00:25:53.514 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:25:53.514 | 99.99th=[ 7148] 00:25:53.514 bw ( KiB/s): min=20439, max=214610, per=3.82%, avg=114863.22, stdev=88278.79, samples=9 00:25:53.514 iops : min= 19, max= 209, avg=112.00, stdev=86.26, samples=9 00:25:53.514 lat (msec) : 50=2.22%, 100=2.06%, 250=4.11%, 500=6.17%, 750=41.77% 00:25:53.514 lat (msec) : 1000=20.41%, >=2000=23.26% 00:25:53.514 cpu : usr=0.02%, sys=1.15%, ctx=1419, majf=0, minf=32769 00:25:53.514 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:25:53.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.514 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.514 issued rwts: total=632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.514 job5: (groupid=0, jobs=1): err= 0: pid=2316863: Wed Jul 24 10:47:59 2024 00:25:53.514 read: IOPS=35, BW=35.6MiB/s (37.3MB/s)(381MiB/10706msec) 00:25:53.514 slat (usec): min=43, max=2085.5k, avg=27834.16, stdev=194354.68 00:25:53.514 clat (msec): min=99, max=6609, avg=2757.41, stdev=2139.75 00:25:53.514 lat (msec): min=640, max=6624, avg=2785.24, stdev=2151.35 00:25:53.514 clat percentiles (msec): 00:25:53.514 | 1.00th=[ 642], 5.00th=[ 726], 10.00th=[ 793], 20.00th=[ 902], 00:25:53.514 | 30.00th=[ 911], 40.00th=[ 978], 50.00th=[ 2668], 60.00th=[ 2735], 00:25:53.514 | 70.00th=[ 2937], 80.00th=[ 5134], 90.00th=[ 6544], 95.00th=[ 6544], 00:25:53.514 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:25:53.514 | 99.99th=[ 6611] 00:25:53.514 bw ( KiB/s): min=12263, max=198656, per=3.45%, avg=103597.60, stdev=69937.56, samples=5 00:25:53.514 iops : min= 11, max= 194, avg=100.80, stdev=68.74, samples=5 00:25:53.514 lat (msec) : 100=0.26%, 750=7.61%, 1000=33.07%, 2000=2.62%, >=2000=56.43% 00:25:53.514 cpu : usr=0.01%, sys=0.78%, ctx=484, majf=0, minf=32769 00:25:53.514 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:25:53.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.514 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:53.514 issued rwts: total=381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.514 job5: (groupid=0, jobs=1): err= 0: pid=2316864: Wed Jul 24 10:47:59 2024 00:25:53.514 read: IOPS=76, BW=76.7MiB/s (80.4MB/s)(818MiB/10666msec) 00:25:53.514 slat (usec): min=40, max=2067.0k, avg=12223.99, stdev=101975.65 00:25:53.514 clat (msec): min=520, max=4852, avg=1567.62, stdev=1172.24 00:25:53.514 lat (msec): min=521, max=6919, avg=1579.84, stdev=1183.37 00:25:53.514 clat percentiles (msec): 00:25:53.514 | 1.00th=[ 523], 5.00th=[ 531], 10.00th=[ 567], 20.00th=[ 667], 00:25:53.514 | 30.00th=[ 768], 40.00th=[ 818], 50.00th=[ 911], 60.00th=[ 1116], 
00:25:53.514 | 70.00th=[ 2735], 80.00th=[ 2802], 90.00th=[ 3574], 95.00th=[ 3675], 00:25:53.514 | 99.00th=[ 4144], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:25:53.514 | 99.99th=[ 4866] 00:25:53.514 bw ( KiB/s): min=10240, max=221184, per=3.92%, avg=117932.67, stdev=74319.62, samples=12 00:25:53.514 iops : min= 10, max= 216, avg=115.17, stdev=72.58, samples=12 00:25:53.514 lat (msec) : 750=27.38%, 1000=27.02%, 2000=14.55%, >=2000=31.05% 00:25:53.514 cpu : usr=0.07%, sys=1.11%, ctx=1520, majf=0, minf=32769 00:25:53.514 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:25:53.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.514 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.514 issued rwts: total=818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.514 job5: (groupid=0, jobs=1): err= 0: pid=2316865: Wed Jul 24 10:47:59 2024 00:25:53.514 read: IOPS=23, BW=23.3MiB/s (24.4MB/s)(247MiB/10594msec) 00:25:53.514 slat (usec): min=344, max=2151.3k, avg=40485.25, stdev=259633.38 00:25:53.514 clat (msec): min=593, max=9292, avg=1453.63, stdev=1810.61 00:25:53.515 lat (msec): min=594, max=9297, avg=1494.12, stdev=1882.88 00:25:53.515 clat percentiles (msec): 00:25:53.515 | 1.00th=[ 592], 5.00th=[ 609], 10.00th=[ 642], 20.00th=[ 768], 00:25:53.515 | 30.00th=[ 827], 40.00th=[ 852], 50.00th=[ 885], 60.00th=[ 885], 00:25:53.515 | 70.00th=[ 995], 80.00th=[ 1183], 90.00th=[ 3071], 95.00th=[ 7349], 00:25:53.515 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:25:53.515 | 99.99th=[ 9329] 00:25:53.515 bw ( KiB/s): min=65536, max=127276, per=3.21%, avg=96406.00, stdev=43656.77, samples=2 00:25:53.515 iops : min= 64, max= 124, avg=94.00, stdev=42.43, samples=2 00:25:53.515 lat (msec) : 750=16.60%, 1000=53.85%, 2000=18.62%, >=2000=10.93% 00:25:53.515 cpu : usr=0.01%, sys=0.72%, ctx=532, majf=0, minf=32769 00:25:53.515 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.5%, 32=13.0%, >=64=74.5% 00:25:53.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.515 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:53.515 issued rwts: total=247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.515 job5: (groupid=0, jobs=1): err= 0: pid=2316866: Wed Jul 24 10:47:59 2024 00:25:53.515 read: IOPS=7, BW=7630KiB/s (7813kB/s)(80.0MiB/10737msec) 00:25:53.515 slat (usec): min=544, max=2145.7k, avg=132777.72, stdev=485895.76 00:25:53.515 clat (msec): min=113, max=10733, avg=6874.65, stdev=1840.68 00:25:53.515 lat (msec): min=2215, max=10736, avg=7007.43, stdev=1726.40 00:25:53.515 clat percentiles (msec): 00:25:53.515 | 1.00th=[ 114], 5.00th=[ 4329], 10.00th=[ 6141], 20.00th=[ 6208], 00:25:53.515 | 30.00th=[ 6275], 40.00th=[ 6275], 50.00th=[ 6275], 60.00th=[ 6342], 00:25:53.515 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10537], 95.00th=[10671], 00:25:53.515 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:53.515 | 99.99th=[10671] 00:25:53.515 lat (msec) : 250=1.25%, >=2000=98.75% 00:25:53.515 cpu : usr=0.00%, sys=0.40%, ctx=179, majf=0, minf=20481 00:25:53.515 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:25:53.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.515 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=100.0% 00:25:53.515 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.515 job5: (groupid=0, jobs=1): err= 0: pid=2316867: Wed Jul 24 10:47:59 2024 00:25:53.515 read: IOPS=26, BW=26.8MiB/s (28.1MB/s)(289MiB/10767msec) 00:25:53.515 slat (usec): min=80, max=2106.3k, avg=36904.82, stdev=230230.23 00:25:53.515 clat (msec): min=99, max=8661, avg=3849.74, stdev=2375.04 00:25:53.515 lat (msec): min=1153, max=10555, avg=3886.65, stdev=2385.95 00:25:53.515 clat percentiles (msec): 00:25:53.515 | 1.00th=[ 1116], 5.00th=[ 1150], 10.00th=[ 1167], 20.00th=[ 1200], 00:25:53.515 | 30.00th=[ 1234], 40.00th=[ 3205], 50.00th=[ 3272], 60.00th=[ 4245], 00:25:53.515 | 70.00th=[ 6477], 80.00th=[ 6745], 90.00th=[ 7013], 95.00th=[ 7148], 00:25:53.515 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:25:53.515 | 99.99th=[ 8658] 00:25:53.515 bw ( KiB/s): min=22528, max=98304, per=1.83%, avg=54945.00, stdev=32966.58, samples=6 00:25:53.515 iops : min= 22, max= 96, avg=53.50, stdev=32.35, samples=6 00:25:53.515 lat (msec) : 100=0.35%, 2000=31.49%, >=2000=68.17% 00:25:53.515 cpu : usr=0.01%, sys=0.78%, ctx=469, majf=0, minf=32769 00:25:53.515 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.5%, 32=11.1%, >=64=78.2% 00:25:53.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.515 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:25:53.515 issued rwts: total=289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.515 job5: (groupid=0, jobs=1): err= 0: pid=2316868: Wed Jul 24 10:47:59 2024 00:25:53.515 read: IOPS=65, BW=65.7MiB/s (68.9MB/s)(701MiB/10666msec) 00:25:53.515 slat (usec): min=39, max=2058.1k, avg=15125.45, stdev=148570.60 00:25:53.515 clat (msec): min=60, max=6959, avg=1565.37, stdev=2196.46 00:25:53.515 lat (msec): min=348, max=6961, avg=1580.49, stdev=2203.97 00:25:53.515 clat percentiles (msec): 00:25:53.515 | 1.00th=[ 351], 5.00th=[ 351], 10.00th=[ 351], 20.00th=[ 355], 00:25:53.515 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 384], 00:25:53.515 | 70.00th=[ 852], 80.00th=[ 2769], 90.00th=[ 6275], 95.00th=[ 6879], 00:25:53.515 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:25:53.515 | 99.99th=[ 6946] 00:25:53.515 bw ( KiB/s): min=24526, max=378880, per=4.88%, avg=146674.00, stdev=147488.94, samples=8 00:25:53.515 iops : min= 23, max= 370, avg=143.00, stdev=144.25, samples=8 00:25:53.515 lat (msec) : 100=0.14%, 500=67.62%, 750=0.57%, 1000=6.56%, >=2000=25.11% 00:25:53.515 cpu : usr=0.02%, sys=1.06%, ctx=653, majf=0, minf=32769 00:25:53.515 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:25:53.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.515 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:53.515 issued rwts: total=701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.515 job5: (groupid=0, jobs=1): err= 0: pid=2316869: Wed Jul 24 10:47:59 2024 00:25:53.515 read: IOPS=101, BW=101MiB/s (106MB/s)(1014MiB/10031msec) 00:25:53.515 slat (usec): min=37, max=2057.7k, avg=9856.79, stdev=89171.65 00:25:53.515 clat (msec): min=30, max=4959, avg=780.87, stdev=572.67 00:25:53.515 lat (msec): min=31, max=4967, avg=790.73, stdev=588.20 00:25:53.515 clat percentiles (msec): 
00:25:53.515 | 1.00th=[ 44], 5.00th=[ 220], 10.00th=[ 422], 20.00th=[ 542], 00:25:53.515 | 30.00th=[ 584], 40.00th=[ 634], 50.00th=[ 768], 60.00th=[ 818], 00:25:53.515 | 70.00th=[ 869], 80.00th=[ 911], 90.00th=[ 936], 95.00th=[ 986], 00:25:53.515 | 99.00th=[ 3037], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:25:53.515 | 99.99th=[ 4933] 00:25:53.515 bw ( KiB/s): min=26570, max=247313, per=5.49%, avg=165058.73, stdev=62179.91, samples=11 00:25:53.515 iops : min= 25, max= 241, avg=161.00, stdev=60.84, samples=11 00:25:53.515 lat (msec) : 50=1.18%, 100=1.28%, 250=3.35%, 500=5.92%, 750=37.77% 00:25:53.515 lat (msec) : 1000=46.84%, 2000=0.59%, >=2000=3.06% 00:25:53.515 cpu : usr=0.09%, sys=1.41%, ctx=909, majf=0, minf=32769 00:25:53.515 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:25:53.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.515 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.515 issued rwts: total=1014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.515 job5: (groupid=0, jobs=1): err= 0: pid=2316870: Wed Jul 24 10:47:59 2024 00:25:53.515 read: IOPS=149, BW=150MiB/s (157MB/s)(1500MiB/10015msec) 00:25:53.515 slat (usec): min=40, max=2105.9k, avg=6664.33, stdev=73290.38 00:25:53.515 clat (msec): min=13, max=3944, avg=633.47, stdev=912.05 00:25:53.515 lat (msec): min=14, max=3950, avg=640.13, stdev=919.63 00:25:53.515 clat percentiles (msec): 00:25:53.515 | 1.00th=[ 27], 5.00th=[ 130], 10.00th=[ 257], 20.00th=[ 266], 00:25:53.515 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 393], 60.00th=[ 393], 00:25:53.515 | 70.00th=[ 401], 80.00th=[ 422], 90.00th=[ 1787], 95.00th=[ 3373], 00:25:53.515 | 99.00th=[ 3842], 99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943], 00:25:53.515 | 99.99th=[ 3943] 00:25:53.515 bw ( KiB/s): min=24576, max=512000, per=9.35%, avg=281044.80, stdev=181880.46, samples=10 00:25:53.515 iops : min= 24, max= 500, avg=274.40, stdev=177.60, samples=10 00:25:53.515 lat (msec) : 20=0.47%, 50=1.93%, 100=1.60%, 250=5.60%, 500=78.60% 00:25:53.515 lat (msec) : 750=0.27%, 2000=2.33%, >=2000=9.20% 00:25:53.515 cpu : usr=0.06%, sys=1.73%, ctx=1674, majf=0, minf=32769 00:25:53.515 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:25:53.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.515 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.515 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.515 job5: (groupid=0, jobs=1): err= 0: pid=2316871: Wed Jul 24 10:47:59 2024 00:25:53.515 read: IOPS=25, BW=25.8MiB/s (27.0MB/s)(258MiB/10016msec) 00:25:53.516 slat (usec): min=458, max=2165.7k, avg=38757.77, stdev=257987.93 00:25:53.516 clat (msec): min=15, max=9291, avg=798.15, stdev=1319.50 00:25:53.516 lat (msec): min=18, max=9323, avg=836.90, stdev=1421.65 00:25:53.516 clat percentiles (msec): 00:25:53.516 | 1.00th=[ 20], 5.00th=[ 73], 10.00th=[ 161], 20.00th=[ 300], 00:25:53.516 | 30.00th=[ 443], 40.00th=[ 584], 50.00th=[ 676], 60.00th=[ 701], 00:25:53.516 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 743], 95.00th=[ 877], 00:25:53.516 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:25:53.516 | 99.99th=[ 9329] 00:25:53.516 bw ( KiB/s): min=75776, max=192512, per=4.46%, avg=134144.00, stdev=82544.82, samples=2 
00:25:53.516 iops : min= 74, max= 188, avg=131.00, stdev=80.61, samples=2 00:25:53.516 lat (msec) : 20=1.16%, 50=2.33%, 100=2.71%, 250=9.30%, 500=20.54% 00:25:53.516 lat (msec) : 750=56.59%, 1000=2.71%, >=2000=4.65% 00:25:53.516 cpu : usr=0.00%, sys=0.74%, ctx=491, majf=0, minf=32769 00:25:53.516 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.4%, >=64=75.6% 00:25:53.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.516 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:53.516 issued rwts: total=258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.516 job5: (groupid=0, jobs=1): err= 0: pid=2316872: Wed Jul 24 10:47:59 2024 00:25:53.516 read: IOPS=117, BW=118MiB/s (124MB/s)(1269MiB/10773msec) 00:25:53.516 slat (usec): min=39, max=3740.6k, avg=8396.33, stdev=117020.68 00:25:53.516 clat (msec): min=113, max=6316, avg=1042.77, stdev=1395.23 00:25:53.516 lat (msec): min=252, max=6321, avg=1051.16, stdev=1402.20 00:25:53.516 clat percentiles (msec): 00:25:53.516 | 1.00th=[ 253], 5.00th=[ 255], 10.00th=[ 257], 20.00th=[ 259], 00:25:53.516 | 30.00th=[ 268], 40.00th=[ 435], 50.00th=[ 600], 60.00th=[ 642], 00:25:53.516 | 70.00th=[ 701], 80.00th=[ 810], 90.00th=[ 3842], 95.00th=[ 4212], 00:25:53.516 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342], 00:25:53.516 | 99.99th=[ 6342] 00:25:53.516 bw ( KiB/s): min=26624, max=501760, per=7.07%, avg=212404.45, stdev=142450.46, samples=11 00:25:53.516 iops : min= 26, max= 490, avg=207.36, stdev=139.14, samples=11 00:25:53.516 lat (msec) : 250=0.08%, 500=41.53%, 750=34.91%, 1000=6.54%, >=2000=16.94% 00:25:53.516 cpu : usr=0.06%, sys=1.33%, ctx=1350, majf=0, minf=32769 00:25:53.516 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:25:53.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.516 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.516 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.516 00:25:53.516 Run status group 0 (all jobs): 00:25:53.516 READ: bw=2936MiB/s (3078MB/s), 1054KiB/s-316MiB/s (1080kB/s-332MB/s), io=36.8GiB (39.6GB), run=10015-12853msec 00:25:53.516 00:25:53.516 Disk stats (read/write): 00:25:53.516 nvme0n1: ios=65850/0, merge=0/0, ticks=7773780/0, in_queue=7773780, util=98.83% 00:25:53.516 nvme1n1: ios=32613/0, merge=0/0, ticks=6511473/0, in_queue=6511473, util=99.00% 00:25:53.516 nvme2n1: ios=37227/0, merge=0/0, ticks=6596278/0, in_queue=6596278, util=98.65% 00:25:53.516 nvme3n1: ios=47459/0, merge=0/0, ticks=6609028/0, in_queue=6609028, util=98.92% 00:25:53.516 nvme4n1: ios=53125/0, merge=0/0, ticks=5429508/0, in_queue=5429508, util=99.11% 00:25:53.516 nvme5n1: ios=63706/0, merge=0/0, ticks=7268560/0, in_queue=7268560, util=99.13% 00:25:53.516 10:47:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:25:53.516 10:47:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:25:53.516 10:47:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:53.516 10:47:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:25:53.516 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 
00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:53.516 10:48:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:54.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:54.083 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.084 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.084 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:54.084 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.084 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:54.084 10:48:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:55.019 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 
controller(s) 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:55.019 10:48:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:55.953 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:55.954 10:48:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:56.889 NQN:nqn.2016-06.io.spdk:cnode4 
disconnected 1 controller(s) 00:25:56.889 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:25:56.889 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:56.889 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:56.889 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:25:56.889 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:56.889 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:25:57.148 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:57.148 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:57.148 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.148 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:57.148 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.148 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:57.148 10:48:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:58.084 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:25:58.084 10:48:05 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:58.084 rmmod nvme_rdma 00:25:58.084 rmmod nvme_fabrics 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 2315392 ']' 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 2315392 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 2315392 ']' 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 2315392 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2315392 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2315392' 00:25:58.084 killing process with pid 2315392 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 2315392 00:25:58.084 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 2315392 00:25:58.343 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:58.602 00:25:58.602 real 0m31.953s 00:25:58.602 user 1m53.416s 00:25:58.602 sys 0m13.158s 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.602 ************************************ 00:25:58.602 END TEST nvmf_srq_overwhelm 00:25:58.602 ************************************ 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:58.602 ************************************ 00:25:58.602 START TEST nvmf_shutdown 00:25:58.602 ************************************ 00:25:58.602 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:58.602 * Looking for test storage... 00:25:58.602 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.603 10:48:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.603 10:48:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.603 10:48:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:58.603 ************************************ 00:25:58.603 START TEST nvmf_shutdown_tc1 00:25:58.603 ************************************ 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:58.603 10:48:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.877 
10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:03.877 10:48:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:03.877 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:03.877 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.877 10:48:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:03.877 Found net devices under 0000:da:00.0: mlx_0_0 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:03.877 Found net devices under 0000:da:00.1: mlx_0_1 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:03.877 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 
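The trace above maps each Mellanox PCI function (0000:da:00.0 and 0000:da:00.1, device 0x15b3:0x1015) to its network interface by globbing the sysfs "net" directory, then loads the InfiniBand/RDMA kernel modules before any interface is used. A minimal standalone sketch of that lookup, assuming the same sysfs layout (the real logic lives in the test suite's nvmf/common.sh helpers traced here):

    #!/usr/bin/env bash
    # Sketch: resolve the netdev name(s) behind each RDMA-capable PCI function,
    # the same way the gather_supported_nvmf_pci_devs trace above does it.
    for pci in 0000:da:00.0 0000:da:00.1; do                  # PCI addresses taken from this log
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # glob the sysfs 'net' directory
        pci_net_devs=("${pci_net_devs[@]##*/}")               # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
    # RDMA stack modules loaded by rdma_device_init in the trace:
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        sudo modprobe "$mod"
    done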
00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:03.878 10:48:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:03.878 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:03.878 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:03.878 altname enp218s0f0np0 00:26:03.878 altname ens818f0np0 00:26:03.878 inet 192.168.100.8/24 scope global mlx_0_0 00:26:03.878 valid_lft forever preferred_lft forever 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:03.878 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:03.878 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:03.878 altname enp218s0f1np1 00:26:03.878 altname ens818f1np1 00:26:03.878 inet 192.168.100.9/24 scope global mlx_0_1 00:26:03.878 valid_lft forever preferred_lft forever 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:03.878 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 
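get_ip_address, traced twice above, pulls the IPv4 address of an RDMA interface out of `ip -o -4 addr show`; field 4 is "ADDR/PREFIX" and the cut drops the prefix length. The helper condensed into a runnable form, with the interface names reported in this run:

    # Sketch of the get_ip_address helper seen in the trace (nvmf/common.sh@112-113).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9 in this run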
00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:04.138 192.168.100.9' 00:26:04.138 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:04.138 192.168.100.9' 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:04.139 192.168.100.9' 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2323423 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2323423 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2323423 ']' 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
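The two discovered addresses are split into first/second target IPs with head/tail, the transport options are pinned to RDMA, the host-side nvme-rdma driver is loaded, and nvmf_tgt is started on core mask 0x1E. A condensed sketch of that selection logic, using the values echoed in the trace:

    # Sketch: split the newline-separated RDMA IP list exactly as the trace does.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'                      # values printed above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)          # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    sudo modprobe nvme-rdma                                           # needed for later host connects
    echo "first target: $NVMF_FIRST_TARGET_IP, second: $NVMF_SECOND_TARGET_IP"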
00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:04.139 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.139 [2024-07-24 10:48:11.463431] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:26:04.139 [2024-07-24 10:48:11.463486] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.139 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.139 [2024-07-24 10:48:11.520939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.139 [2024-07-24 10:48:11.564074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.139 [2024-07-24 10:48:11.564117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.139 [2024-07-24 10:48:11.564124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.139 [2024-07-24 10:48:11.564129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.139 [2024-07-24 10:48:11.564134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:04.139 [2024-07-24 10:48:11.564176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.139 [2024-07-24 10:48:11.564264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.139 [2024-07-24 10:48:11.564372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.139 [2024-07-24 10:48:11.564373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:04.397 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:04.397 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:04.397 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:04.398 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:04.398 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.398 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.398 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:04.398 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.398 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.398 [2024-07-24 10:48:11.735240] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x888ac0/0x88cfb0) succeed. 00:26:04.398 [2024-07-24 10:48:11.744832] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x88a0b0/0x8ce640) succeed. 
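rpc_cmd in these tests forwards to scripts/rpc.py against the target's default RPC socket (/var/tmp/spdk.sock), so the transport-creation step traced above would look roughly like the following when run by hand; the two "Create IB device ... succeed" notices show both mlx5 ports being picked up by the RDMA transport:

    # Roughly equivalent to the rpc_cmd call in the trace, issued directly
    # against the running nvmf_tgt:
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192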
00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.657 10:48:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.657 Malloc1 00:26:04.657 [2024-07-24 10:48:11.954599] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:04.657 Malloc2 00:26:04.657 Malloc3 00:26:04.657 Malloc4 00:26:04.916 Malloc5 00:26:04.916 Malloc6 00:26:04.916 Malloc7 00:26:04.916 Malloc8 00:26:04.916 Malloc9 00:26:04.916 Malloc10 00:26:04.916 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.916 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:04.916 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:04.916 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2323617 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2323617 /var/tmp/bdevperf.sock 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2323617 ']' 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
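The per-subsystem stanzas that the create_subsystems loop above writes into target/rpcs.txt are not echoed in the log. Judging by the Malloc1-Malloc10 bdevs (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 set earlier) and the single RDMA listener on 192.168.100.8:4420 that appear afterwards, each iteration plausibly emits something like the hypothetical stanza below; the exact file contents may differ.

    # Hypothetical reconstruction of one "cat <<-EOF" stanza for subsystem $i;
    # the real rpcs.txt is not shown in this log.
    i=1
    cat <<EOF
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    EOF
    # The collected file is then replayed against the target in one shot,
    # e.g. ./scripts/rpc.py < rpcs.txt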
00:26:05.175 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 [2024-07-24 10:48:12.423830] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:26:05.176 [2024-07-24 10:48:12.423881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": "ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.176 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.176 { 00:26:05.176 "params": { 00:26:05.176 "name": "Nvme$subsystem", 00:26:05.176 "trtype": "$TEST_TRANSPORT", 00:26:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.176 "adrfam": 
"ipv4", 00:26:05.176 "trsvcid": "$NVMF_PORT", 00:26:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.176 "hdgst": ${hdgst:-false}, 00:26:05.176 "ddgst": ${ddgst:-false} 00:26:05.176 }, 00:26:05.176 "method": "bdev_nvme_attach_controller" 00:26:05.176 } 00:26:05.176 EOF 00:26:05.176 )") 00:26:05.177 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:05.177 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.177 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:05.177 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:05.177 10:48:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme1", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme2", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme3", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme4", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme5", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme6", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 
00:26:05.177 "name": "Nvme7", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme8", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme9", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 },{ 00:26:05.177 "params": { 00:26:05.177 "name": "Nvme10", 00:26:05.177 "trtype": "rdma", 00:26:05.177 "traddr": "192.168.100.8", 00:26:05.177 "adrfam": "ipv4", 00:26:05.177 "trsvcid": "4420", 00:26:05.177 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:05.177 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:05.177 "hdgst": false, 00:26:05.177 "ddgst": false 00:26:05.177 }, 00:26:05.177 "method": "bdev_nvme_attach_controller" 00:26:05.177 }' 00:26:05.177 [2024-07-24 10:48:12.482151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.177 [2024-07-24 10:48:12.522521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2323617 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:06.112 10:48:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:07.048 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2323617 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2323423 00:26:07.049 10:48:14 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 [2024-07-24 10:48:14.424281] 
Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:26:07.049 [2024-07-24 10:48:14.424333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323960 ] 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.049 "ddgst": ${ddgst:-false} 00:26:07.049 }, 00:26:07.049 "method": "bdev_nvme_attach_controller" 00:26:07.049 } 00:26:07.049 EOF 00:26:07.049 )") 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:07.049 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:07.049 { 00:26:07.049 "params": { 00:26:07.049 "name": "Nvme$subsystem", 00:26:07.049 "trtype": "$TEST_TRANSPORT", 00:26:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.049 "adrfam": "ipv4", 00:26:07.049 "trsvcid": "$NVMF_PORT", 00:26:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.049 "hdgst": ${hdgst:-false}, 00:26:07.050 "ddgst": ${ddgst:-false} 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 } 00:26:07.050 EOF 00:26:07.050 )") 00:26:07.050 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:07.050 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
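The block of nvmf/common.sh@534/@554 entries above is gen_nvmf_target_json at work: for every requested subsystem it appends one heredoc fragment to a config array, joins the fragments with IFS=, and runs the result through jq (the @556 entry) before it reaches bdevperf via the --json /dev/fd/63 process substitution visible in the EAL parameter line. A minimal sketch of that pattern, with the resolved values from this run substituted for $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT (the function name build_controllers is illustrative, not the harness's own):

build_controllers() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # comma-joined controller list, as printed in the next entries
}
build_controllers 1 2 3 4 5 6 7 8 9 10   # the harness embeds this list in a full bdev config and validates it with jq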
00:26:07.050 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.050 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:07.050 10:48:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme1", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme2", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme3", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme4", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme5", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme6", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme7", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme8", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": 
"4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme9", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 },{ 00:26:07.050 "params": { 00:26:07.050 "name": "Nvme10", 00:26:07.050 "trtype": "rdma", 00:26:07.050 "traddr": "192.168.100.8", 00:26:07.050 "adrfam": "ipv4", 00:26:07.050 "trsvcid": "4420", 00:26:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:07.050 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:07.050 "hdgst": false, 00:26:07.050 "ddgst": false 00:26:07.050 }, 00:26:07.050 "method": "bdev_nvme_attach_controller" 00:26:07.050 }' 00:26:07.050 [2024-07-24 10:48:14.482727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.309 [2024-07-24 10:48:14.523158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.246 Running I/O for 1 seconds... 00:26:09.181 00:26:09.181 Latency(us) 00:26:09.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.181 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme1n1 : 1.16 344.29 21.52 0.00 0.00 181497.90 22843.98 219701.64 00:26:09.181 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme2n1 : 1.16 356.85 22.30 0.00 0.00 173500.68 22968.81 210713.84 00:26:09.181 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme3n1 : 1.16 384.84 24.05 0.00 0.00 158686.81 7021.71 147799.28 00:26:09.181 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme4n1 : 1.17 387.01 24.19 0.00 0.00 155728.96 5461.33 139810.13 00:26:09.181 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme5n1 : 1.17 383.96 24.00 0.00 0.00 155427.87 17975.59 128825.05 00:26:09.181 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme6n1 : 1.17 383.58 23.97 0.00 0.00 152562.90 22344.66 121335.22 00:26:09.181 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme7n1 : 1.17 383.15 23.95 0.00 0.00 150870.17 24966.10 111848.11 00:26:09.181 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme8n1 : 1.17 382.75 23.92 0.00 0.00 148546.18 23093.64 104358.28 00:26:09.181 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme9n1 : 1.17 381.91 23.87 0.00 0.00 148250.69 1810.04 102360.99 
00:26:09.181 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.181 Verification LBA range: start 0x0 length 0x400 00:26:09.181 Nvme10n1 : 1.18 379.05 23.69 0.00 0.00 147287.42 8301.23 160781.65 00:26:09.181 =================================================================================================================== 00:26:09.181 Total : 3767.38 235.46 0.00 0.00 156848.44 1810.04 219701.64 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:09.440 rmmod nvme_rdma 00:26:09.440 rmmod nvme_fabrics 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2323423 ']' 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2323423 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2323423 ']' 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2323423 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.440 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2323423 00:26:09.698 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:09.698 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:09.698 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2323423' 00:26:09.698 killing process with pid 2323423 00:26:09.698 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2323423 00:26:09.698 10:48:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2323423 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:09.985 00:26:09.985 real 0m11.357s 00:26:09.985 user 0m27.257s 00:26:09.985 sys 0m4.958s 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:09.985 ************************************ 00:26:09.985 END TEST nvmf_shutdown_tc1 00:26:09.985 ************************************ 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:09.985 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:10.248 ************************************ 00:26:10.248 START TEST nvmf_shutdown_tc2 00:26:10.248 ************************************ 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.248 10:48:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:10.248 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:10.248 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:10.248 Found net devices under 0000:da:00.0: mlx_0_0 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:10.248 Found net devices under 0000:da:00.1: mlx_0_1 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:10.248 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:26:10.249 
10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:10.249 10:48:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:10.249 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:10.249 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:10.249 altname enp218s0f0np0 00:26:10.249 altname ens818f0np0 00:26:10.249 inet 192.168.100.8/24 scope global mlx_0_0 00:26:10.249 valid_lft forever preferred_lft forever 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:10.249 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:10.249 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:10.249 altname enp218s0f1np1 00:26:10.249 altname ens818f1np1 00:26:10.249 inet 192.168.100.9/24 scope global mlx_0_1 00:26:10.249 valid_lft forever preferred_lft forever 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 
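The get_ip_address call traced here (and repeated for mlx_0_1 just below) boils down to a three-stage pipeline over ip(8) output; the two addresses are then collected into RDMA_IP_LIST and split into the first and second target IPs. Roughly, using this run's values (a sketch of the pattern, not the verbatim common.sh code):

ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)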
00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:10.249 192.168.100.9' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:10.249 192.168.100.9' 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:10.249 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:10.249 192.168.100.9' 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2324529 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2324529 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2324529 ']' 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.250 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.508 [2024-07-24 10:48:17.723839] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:26:10.508 [2024-07-24 10:48:17.723885] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.508 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.508 [2024-07-24 10:48:17.779736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:10.508 [2024-07-24 10:48:17.824037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.508 [2024-07-24 10:48:17.824075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.508 [2024-07-24 10:48:17.824081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.508 [2024-07-24 10:48:17.824087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.508 [2024-07-24 10:48:17.824092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
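nvmf_tgt is launched above with -m 0x1E (-e 0xFFFF is the tracepoint group mask echoed in the notices), and the spdk_app_start notice reports four available cores: 0x1E is binary 11110, i.e. cores 1-4, which is why the reactor notices just below come up on cores 1, 2, 3 and 4 while core 0 stays free for the bdevperf initiator (launched with -c 0x1, reactor on core 0). A quick way to expand such a mask:

for c in {0..7}; do (( (0x1E >> c) & 1 )) && echo "core $c"; done   # prints core 1 .. core 4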
00:26:10.508 [2024-07-24 10:48:17.824194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.508 [2024-07-24 10:48:17.824283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:10.508 [2024-07-24 10:48:17.824389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.508 [2024-07-24 10:48:17.824390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:10.508 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.508 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:10.508 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:10.509 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:10.509 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.509 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.509 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:10.509 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.509 10:48:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.767 [2024-07-24 10:48:17.980943] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bd6ac0/0x1bdafb0) succeed. 00:26:10.767 [2024-07-24 10:48:17.990089] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bd80b0/0x1c1c640) succeed. 
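The nvmf_create_transport RPC above is what triggers the two create_ib_device notices (one per mlx5 port). The create_subsystems step that follows writes one batch of RPCs per subsystem into rpcs.txt and replays it with rpc_cmd, producing the Malloc1-Malloc10 bdevs and the RDMA listener on 192.168.100.8 port 4420 reported below. The exact batch lives in target/shutdown.sh; a representative per-subsystem sketch using standard SPDK RPCs (sizes, serials and flags here are assumptions, not copied from the script) would be:

bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -f ipv4 -s 4420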
00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.767 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.768 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.768 Malloc1 00:26:10.768 [2024-07-24 10:48:18.198933] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:10.768 Malloc2 00:26:11.026 Malloc3 00:26:11.026 Malloc4 00:26:11.026 Malloc5 00:26:11.026 Malloc6 00:26:11.026 Malloc7 00:26:11.285 Malloc8 00:26:11.285 Malloc9 00:26:11.285 Malloc10 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2324801 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2324801 /var/tmp/bdevperf.sock 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2324801 ']' 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:11.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
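With the target populated, tc2 starts its own bdevperf instance on /var/tmp/bdevperf.sock and, exactly as in tc1, feeds it the generated controller list through process substitution; the --json /dev/fd/63 argument in the command below is bash's <(...) file descriptor. Written out, the launch traced below is essentially:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10    # queue depth 64, 64 KiB IOs, verify workload, 10 s run
    # (full Jenkins path shortened to ./build/examples for readability)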
00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.285 { 00:26:11.285 "params": { 00:26:11.285 "name": "Nvme$subsystem", 00:26:11.285 "trtype": "$TEST_TRANSPORT", 00:26:11.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.285 "adrfam": "ipv4", 00:26:11.285 "trsvcid": "$NVMF_PORT", 00:26:11.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.285 "hdgst": ${hdgst:-false}, 00:26:11.285 "ddgst": ${ddgst:-false} 00:26:11.285 }, 00:26:11.285 "method": "bdev_nvme_attach_controller" 00:26:11.285 } 00:26:11.285 EOF 00:26:11.285 )") 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.285 { 00:26:11.285 "params": { 00:26:11.285 "name": "Nvme$subsystem", 00:26:11.285 "trtype": "$TEST_TRANSPORT", 00:26:11.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.285 "adrfam": "ipv4", 00:26:11.285 "trsvcid": "$NVMF_PORT", 00:26:11.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.285 "hdgst": ${hdgst:-false}, 00:26:11.285 "ddgst": ${ddgst:-false} 00:26:11.285 }, 00:26:11.285 "method": "bdev_nvme_attach_controller" 00:26:11.285 } 00:26:11.285 EOF 00:26:11.285 )") 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.285 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": 
"bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": "bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": "bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": "bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 [2024-07-24 10:48:18.663838] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:26:11.286 [2024-07-24 10:48:18.663890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324801 ] 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": "bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": "bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": "bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:11.286 { 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme$subsystem", 00:26:11.286 "trtype": "$TEST_TRANSPORT", 00:26:11.286 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:11.286 "adrfam": "ipv4", 00:26:11.286 "trsvcid": "$NVMF_PORT", 00:26:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.286 "hdgst": ${hdgst:-false}, 00:26:11.286 "ddgst": ${ddgst:-false} 00:26:11.286 }, 00:26:11.286 "method": "bdev_nvme_attach_controller" 00:26:11.286 } 00:26:11.286 EOF 00:26:11.286 )") 00:26:11.286 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:11.286 10:48:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:11.286 "params": { 00:26:11.286 "name": "Nvme1", 00:26:11.286 "trtype": "rdma", 00:26:11.286 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme2", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme3", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme4", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme5", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme6", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 
00:26:11.287 "params": { 00:26:11.287 "name": "Nvme7", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme8", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme9", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 },{ 00:26:11.287 "params": { 00:26:11.287 "name": "Nvme10", 00:26:11.287 "trtype": "rdma", 00:26:11.287 "traddr": "192.168.100.8", 00:26:11.287 "adrfam": "ipv4", 00:26:11.287 "trsvcid": "4420", 00:26:11.287 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:11.287 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:11.287 "hdgst": false, 00:26:11.287 "ddgst": false 00:26:11.287 }, 00:26:11.287 "method": "bdev_nvme_attach_controller" 00:26:11.287 }' 00:26:11.287 [2024-07-24 10:48:18.720172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.546 [2024-07-24 10:48:18.760296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.479 Running I/O for 10 seconds... 
00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:12.479 10:48:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:12.738 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:12.738 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:12.738 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:12.738 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:12.738 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.738 
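Once bdevperf reports "Running I/O for 10 seconds", shutdown.sh@107 calls waitforio, which polls the bdevperf RPC socket for read completions on Nvme1n1: the first poll above sees read_io_count=3, the next one (continued below) sees 163 and clears the 100-read threshold, so the test is allowed to move on to the shutdown step. A sketch of the loop as it can be read off the @57-@69 trace lines (rpc_cmd is the harness wrapper around scripts/rpc.py; treat details as an approximation):

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    [[ -n $rpc_sock && -n $bdev ]] || return 1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [[ $read_io_count -ge 100 ]]; then
            ret=0   # I/O is flowing; stop polling
            break
        fi
        sleep 0.25  # shutdown.sh@67: wait a quarter second between polls
    done
    return $ret
}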
10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:12.996 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.996 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=163 00:26:12.996 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 163 -ge 100 ']' 00:26:12.996 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:12.996 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:12.996 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2324801 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2324801 ']' 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2324801 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2324801 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2324801' 00:26:12.997 killing process with pid 2324801 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2324801 00:26:12.997 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2324801 00:26:13.255 Received shutdown signal, test time was about 0.837206 seconds 00:26:13.255 00:26:13.255 Latency(us) 00:26:13.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.255 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme1n1 : 0.82 350.31 21.89 0.00 0.00 178858.34 7583.45 217704.35 00:26:13.255 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme2n1 : 0.82 349.81 21.86 0.00 0.00 175450.10 7864.32 207717.91 00:26:13.255 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme3n1 : 0.82 371.06 23.19 0.00 0.00 162480.79 4431.48 196732.83 00:26:13.255 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme4n1 : 0.83 393.45 24.59 0.00 0.00 150210.00 4244.24 
130822.34 00:26:13.255 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme5n1 : 0.83 386.73 24.17 0.00 0.00 149878.88 8862.96 119837.26 00:26:13.255 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme6n1 : 0.83 386.19 24.14 0.00 0.00 146529.91 9799.19 112846.75 00:26:13.255 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme7n1 : 0.83 385.58 24.10 0.00 0.00 143921.25 10236.10 103359.63 00:26:13.255 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme8n1 : 0.83 384.95 24.06 0.00 0.00 141110.61 10735.42 95370.48 00:26:13.255 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme9n1 : 0.83 384.29 24.02 0.00 0.00 138442.85 11359.57 101861.67 00:26:13.255 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:13.255 Verification LBA range: start 0x0 length 0x400 00:26:13.255 Nvme10n1 : 0.84 306.02 19.13 0.00 0.00 170287.54 2917.91 218702.99 00:26:13.255 =================================================================================================================== 00:26:13.255 Total : 3698.38 231.15 0.00 0.00 154924.76 2917.91 218702.99 00:26:13.514 10:48:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2324529 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:14.449 rmmod nvme_rdma 00:26:14.449 rmmod nvme_fabrics 00:26:14.449 
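The latency table above is bdevperf's per-controller summary for the roughly 0.84 s that actually ran before the shutdown path kicked in. After it is printed, the same killprocess helper is used twice: first on the bdevperf client (pid 2324801, reactor_0, above), then, once stoptarget has removed the generated bdevperf.conf/rpcs.txt and nvmftestfini has unloaded nvme-rdma/nvme-fabrics, on the nvmf target itself (pid 2324529, reactor_1, below). A sketch of killprocess as reconstructed from the @950-@974 trace lines (the branch taken when the process turns out to be a sudo wrapper is not exercised in this run and is omitted):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                 # @954: only proceed if the pid is still alive
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 above
    fi
    if [[ $process_name != sudo ]]; then       # @960: do not signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                # @974: reap the child and collect its status
}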
10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2324529 ']' 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2324529 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2324529 ']' 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2324529 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2324529 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2324529' 00:26:14.449 killing process with pid 2324529 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2324529 00:26:14.449 10:48:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2324529 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:15.081 00:26:15.081 real 0m4.846s 00:26:15.081 user 0m19.440s 00:26:15.081 sys 0m0.966s 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:15.081 ************************************ 00:26:15.081 END TEST nvmf_shutdown_tc2 00:26:15.081 ************************************ 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:15.081 ************************************ 00:26:15.081 START TEST nvmf_shutdown_tc3 00:26:15.081 ************************************ 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:15.081 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:15.082 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:15.082 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:15.082 Found net devices under 0000:da:00.0: mlx_0_0 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:15.082 Found net devices under 0000:da:00.1: mlx_0_1 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:15.082 10:48:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:15.082 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:15.082 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:15.082 altname enp218s0f0np0 00:26:15.082 altname ens818f0np0 00:26:15.082 inet 192.168.100.8/24 scope global mlx_0_0 00:26:15.082 valid_lft forever preferred_lft forever 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:15.082 10:48:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:15.082 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:15.082 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:15.082 altname enp218s0f1np1 00:26:15.082 altname ens818f1np1 00:26:15.082 inet 192.168.100.9/24 scope global mlx_0_1 00:26:15.082 valid_lft forever preferred_lft forever 00:26:15.082 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.083 10:48:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:15.083 192.168.100.9' 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:15.083 192.168.100.9' 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:15.083 192.168.100.9' 00:26:15.083 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:15.342 10:48:22 
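For tc3 the whole nvmftestinit sequence runs again: get_rdma_if_list enumerates the two mlx5 netdevs (mlx_0_0 and mlx_0_1 on 0000:da:00.x), and get_ip_address reads each one's IPv4 address, which is how NVMF_FIRST_TARGET_IP ends up as 192.168.100.8 and NVMF_SECOND_TARGET_IP as 192.168.100.9. The address extraction as it appears at common.sh@112-113 and @456-458, sketched (get_rdma_if_list is the harness helper seen in the trace):

get_ip_address() {
    local interface=$1
    # "6: mlx_0_0    inet 192.168.100.8/24 scope global ..." -> "192.168.100.8"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(
    for nic_name in $(get_rdma_if_list); do
        get_ip_address "$nic_name"
    done
)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)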
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2325557 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2325557 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2325557 ']' 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.342 [2024-07-24 10:48:22.598091] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:26:15.342 [2024-07-24 10:48:22.598131] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.342 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.342 [2024-07-24 10:48:22.647231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.342 [2024-07-24 10:48:22.689186] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.342 [2024-07-24 10:48:22.689222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:15.342 [2024-07-24 10:48:22.689229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.342 [2024-07-24 10:48:22.689235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.342 [2024-07-24 10:48:22.689239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.342 [2024-07-24 10:48:22.689337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.342 [2024-07-24 10:48:22.689422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.342 [2024-07-24 10:48:22.689531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.342 [2024-07-24 10:48:22.689532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:15.342 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.601 [2024-07-24 10:48:22.844673] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2092ac0/0x2096fb0) succeed. 00:26:15.601 [2024-07-24 10:48:22.853760] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20940b0/0x20d8640) succeed. 
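With addressing settled, the target side comes up: nvmfappstart launches nvmf_tgt with -i 0 -e 0xFFFF -m 0x1E (four reactors on cores 1-4, pid 2325557 here) and waits for its /var/tmp/spdk.sock, then shutdown.sh@20 creates the RDMA transport, which is what produces the two "Create IB device mlx5_x ... succeed" notices above. shutdown.sh@27-35 then appends one block of per-subsystem RPCs to rpcs.txt for each of the ten indices and replays the file through rpc_cmd, yielding the Malloc1..Malloc10 bdevs and the RDMA listener on 192.168.100.8:4420 shown below. Sketched to the extent the trace shows it (the contents of each rpcs.txt block are not expanded in this log, so they are only described in a comment; presumably bdev/subsystem/namespace/listener RPCs):

nvmfappstart -m 0x1E        # nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, then waitforlisten on its RPC socket
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

rm -rf "$testdir/rpcs.txt"                  # $testdir = .../spdk/test/nvmf/target in this workspace
for i in "${num_subsystems[@]}"; do         # {1..10}
    : # shutdown.sh@28 cats this subsystem's RPC block into rpcs.txt; the block
      # itself is not shown in the log, so it is not reproduced here
done
rpc_cmd < "$testdir/rpcs.txt"               # shutdown.sh@35: replay the batch -> Malloc1..Malloc10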
00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.601 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.601 Malloc1 00:26:15.860 [2024-07-24 10:48:23.061187] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:15.860 Malloc2 00:26:15.860 Malloc3 00:26:15.860 Malloc4 00:26:15.860 Malloc5 00:26:15.860 Malloc6 00:26:15.860 Malloc7 00:26:16.119 Malloc8 00:26:16.119 Malloc9 00:26:16.119 Malloc10 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2325665 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2325665 /var/tmp/bdevperf.sock 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2325665 ']' 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:16.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.119 { 00:26:16.119 "params": { 00:26:16.119 "name": "Nvme$subsystem", 00:26:16.119 "trtype": "$TEST_TRANSPORT", 00:26:16.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.119 "adrfam": "ipv4", 00:26:16.119 "trsvcid": "$NVMF_PORT", 00:26:16.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.119 "hdgst": ${hdgst:-false}, 00:26:16.119 "ddgst": ${ddgst:-false} 00:26:16.119 }, 00:26:16.119 "method": "bdev_nvme_attach_controller" 00:26:16.119 } 00:26:16.119 EOF 00:26:16.119 )") 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.119 { 00:26:16.119 "params": { 00:26:16.119 "name": "Nvme$subsystem", 00:26:16.119 "trtype": "$TEST_TRANSPORT", 00:26:16.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.119 "adrfam": "ipv4", 00:26:16.119 "trsvcid": "$NVMF_PORT", 00:26:16.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.119 "hdgst": ${hdgst:-false}, 00:26:16.119 "ddgst": ${ddgst:-false} 00:26:16.119 }, 00:26:16.119 "method": "bdev_nvme_attach_controller" 00:26:16.119 } 00:26:16.119 EOF 00:26:16.119 )") 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.119 { 00:26:16.119 "params": { 00:26:16.119 "name": "Nvme$subsystem", 00:26:16.119 "trtype": "$TEST_TRANSPORT", 00:26:16.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.119 "adrfam": "ipv4", 00:26:16.119 "trsvcid": "$NVMF_PORT", 00:26:16.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.119 "hdgst": ${hdgst:-false}, 00:26:16.119 "ddgst": ${ddgst:-false} 00:26:16.119 }, 00:26:16.119 "method": "bdev_nvme_attach_controller" 00:26:16.119 } 00:26:16.119 EOF 00:26:16.119 )") 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.119 { 00:26:16.119 "params": { 00:26:16.119 "name": "Nvme$subsystem", 00:26:16.119 "trtype": "$TEST_TRANSPORT", 00:26:16.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.119 "adrfam": "ipv4", 00:26:16.119 "trsvcid": "$NVMF_PORT", 00:26:16.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.119 "hdgst": ${hdgst:-false}, 00:26:16.119 "ddgst": ${ddgst:-false} 00:26:16.119 }, 00:26:16.119 "method": "bdev_nvme_attach_controller" 00:26:16.119 } 00:26:16.119 EOF 00:26:16.119 )") 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.119 { 00:26:16.119 "params": { 00:26:16.119 "name": "Nvme$subsystem", 00:26:16.119 "trtype": "$TEST_TRANSPORT", 00:26:16.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.119 "adrfam": "ipv4", 00:26:16.119 "trsvcid": "$NVMF_PORT", 00:26:16.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.119 "hdgst": ${hdgst:-false}, 00:26:16.119 "ddgst": ${ddgst:-false} 00:26:16.119 }, 00:26:16.119 "method": "bdev_nvme_attach_controller" 00:26:16.119 } 00:26:16.119 EOF 00:26:16.119 )") 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.119 { 00:26:16.119 "params": { 00:26:16.119 "name": "Nvme$subsystem", 00:26:16.119 "trtype": "$TEST_TRANSPORT", 00:26:16.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.119 "adrfam": "ipv4", 00:26:16.119 "trsvcid": "$NVMF_PORT", 00:26:16.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.119 "hdgst": ${hdgst:-false}, 00:26:16.119 "ddgst": ${ddgst:-false} 00:26:16.119 }, 00:26:16.119 "method": "bdev_nvme_attach_controller" 00:26:16.119 } 00:26:16.119 EOF 00:26:16.119 )") 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.119 [2024-07-24 10:48:23.529003] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:26:16.119 [2024-07-24 10:48:23.529050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325665 ] 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.119 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.119 { 00:26:16.119 "params": { 00:26:16.119 "name": "Nvme$subsystem", 00:26:16.119 "trtype": "$TEST_TRANSPORT", 00:26:16.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.119 "adrfam": "ipv4", 00:26:16.119 "trsvcid": "$NVMF_PORT", 00:26:16.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.119 "hdgst": ${hdgst:-false}, 00:26:16.120 "ddgst": ${ddgst:-false} 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 } 00:26:16.120 EOF 00:26:16.120 )") 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.120 { 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme$subsystem", 00:26:16.120 "trtype": "$TEST_TRANSPORT", 00:26:16.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "$NVMF_PORT", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.120 "hdgst": ${hdgst:-false}, 00:26:16.120 "ddgst": ${ddgst:-false} 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 } 00:26:16.120 EOF 00:26:16.120 )") 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.120 { 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme$subsystem", 00:26:16.120 "trtype": "$TEST_TRANSPORT", 00:26:16.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "$NVMF_PORT", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.120 "hdgst": ${hdgst:-false}, 00:26:16.120 "ddgst": ${ddgst:-false} 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 } 00:26:16.120 EOF 00:26:16.120 )") 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.120 { 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme$subsystem", 00:26:16.120 "trtype": "$TEST_TRANSPORT", 00:26:16.120 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "$NVMF_PORT", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.120 "hdgst": ${hdgst:-false}, 00:26:16.120 "ddgst": ${ddgst:-false} 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 } 00:26:16.120 EOF 00:26:16.120 )") 00:26:16.120 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:26:16.120 10:48:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme1", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme2", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme3", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme4", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme5", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme6", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 
00:26:16.120 "params": { 00:26:16.120 "name": "Nvme7", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme8", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme9", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 },{ 00:26:16.120 "params": { 00:26:16.120 "name": "Nvme10", 00:26:16.120 "trtype": "rdma", 00:26:16.120 "traddr": "192.168.100.8", 00:26:16.120 "adrfam": "ipv4", 00:26:16.120 "trsvcid": "4420", 00:26:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:16.120 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:16.120 "hdgst": false, 00:26:16.120 "ddgst": false 00:26:16.120 }, 00:26:16.120 "method": "bdev_nvme_attach_controller" 00:26:16.120 }' 00:26:16.378 [2024-07-24 10:48:23.584841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.378 [2024-07-24 10:48:23.624842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.308 Running I/O for 10 seconds... 
00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.308 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:17.566 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.566 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:17.566 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:17.566 10:48:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:17.825 10:48:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=147 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2325557 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2325557 ']' 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2325557 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2325557 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2325557' 00:26:17.825 killing process with pid 2325557 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2325557 00:26:17.825 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2325557 00:26:18.391 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:26:18.391 10:48:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:26:18.962 [2024-07-24 10:48:26.299022] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192569c0 was disconnected and freed. reset controller. 00:26:18.962 [2024-07-24 10:48:26.301547] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256740 was disconnected and freed. reset controller. 00:26:18.962 [2024-07-24 10:48:26.303793] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192564c0 was disconnected and freed. reset controller. 
00:26:18.962 [2024-07-24 10:48:26.306394] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256240 was disconnected and freed. reset controller. 00:26:18.962 [2024-07-24 10:48:26.308580] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:26:18.962 [2024-07-24 10:48:26.310878] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:26:18.962 [2024-07-24 10:48:26.310983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a79fd80 len:0x10000 key:0x184100 00:26:18.962 [2024-07-24 10:48:26.311015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.962 [2024-07-24 10:48:26.311050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x184100 00:26:18.962 [2024-07-24 10:48:26.311073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.962 [2024-07-24 10:48:26.311104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x184100 00:26:18.962 [2024-07-24 10:48:26.311126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.962 [2024-07-24 10:48:26.311153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.311983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.311995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.312006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.312029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x184100 00:26:18.963 [2024-07-24 10:48:26.312052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.963 [2024-07-24 10:48:26.312327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183f00 00:26:18.963 [2024-07-24 10:48:26.312337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183f00 00:26:18.964 [2024-07-24 10:48:26.312699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x183200 00:26:18.964 [2024-07-24 10:48:26.312723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfb7000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf96000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133bf000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001339e000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001337d000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001335c000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bc9000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bea000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c0b000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.312948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c2c000 len:0x10000 key:0x184400 00:26:18.964 [2024-07-24 10:48:26.312959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:a3a4 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315385] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:26:18.964 [2024-07-24 10:48:26.315435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:63 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183600 00:26:18.964 [2024-07-24 10:48:26.315821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.964 [2024-07-24 10:48:26.315833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.315843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.315857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.315867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.315880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.315890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.315903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.315913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.315927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.315939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.315953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.315964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.315977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.315987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183600 00:26:18.965 [2024-07-24 10:48:26.316272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183e00 00:26:18.965 [2024-07-24 10:48:26.316616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183800 00:26:18.965 [2024-07-24 10:48:26.316640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab0000 len:0x10000 key:0x184400 00:26:18.965 [2024-07-24 10:48:26.316665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.965 [2024-07-24 10:48:26.316678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cad1000 len:0x10000 key:0x184400 00:26:18.965 [2024-07-24 10:48:26.316688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000caf2000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb13000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e7e000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e5d000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e3c000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e1b000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4c6000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4e7000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8a0000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011aff000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ade000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.316979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011abd000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.316989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a9c000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a7b000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ed9000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012eb8000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e97000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e76000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.317171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x184400 00:26:18.966 [2024-07-24 10:48:26.317181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:0f12 p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.319876] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:26:18.966 [2024-07-24 10:48:26.319930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.319942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.319959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.319970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.319983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.319994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.966 [2024-07-24 10:48:26.320199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183c00 00:26:18.966 [2024-07-24 10:48:26.320211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183c00 00:26:18.967 [2024-07-24 10:48:26.320669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183d00 00:26:18.967 [2024-07-24 10:48:26.320692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183d00 00:26:18.967 [2024-07-24 10:48:26.320715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183d00 00:26:18.967 [2024-07-24 10:48:26.320741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183d00 00:26:18.967 [2024-07-24 10:48:26.320765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183d00 00:26:18.967 [2024-07-24 10:48:26.320788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183d00 00:26:18.967 [2024-07-24 10:48:26.320812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.967 [2024-07-24 10:48:26.320825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.320834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.320848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.320859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.320871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.320881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.320894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.320905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.320918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.320930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.320944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.320954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.320967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.320978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.320992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183d00 00:26:18.968 [2024-07-24 10:48:26.321416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x184000 00:26:18.968 [2024-07-24 10:48:26.321440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.321453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183e00 00:26:18.968 [2024-07-24 10:48:26.321464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:d91a p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.323810] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 
00:26:18.968 [2024-07-24 10:48:26.323861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x184000 00:26:18.968 [2024-07-24 10:48:26.323885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.323918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x184000 00:26:18.968 [2024-07-24 10:48:26.323941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.323967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x184000 00:26:18.968 [2024-07-24 10:48:26.323990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.324016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x184000 00:26:18.968 [2024-07-24 10:48:26.324038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.324066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x184000 00:26:18.968 [2024-07-24 10:48:26.324088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.968 [2024-07-24 10:48:26.324116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 
00:26:18.969 [2024-07-24 10:48:26.324312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x184000 00:26:18.969 [2024-07-24 10:48:26.324501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 
00:26:18.969 [2024-07-24 10:48:26.324777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.324977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.324989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 
00:26:18.969 [2024-07-24 10:48:26.325063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 
00:26:18.969 [2024-07-24 10:48:26.325289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183700 00:26:18.969 [2024-07-24 10:48:26.325347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.969 [2024-07-24 10:48:26.325360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183700 00:26:18.970 [2024-07-24 10:48:26.325371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183700 00:26:18.970 [2024-07-24 10:48:26.325394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183700 00:26:18.970 [2024-07-24 10:48:26.325419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183700 00:26:18.970 [2024-07-24 10:48:26.325442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183700 00:26:18.970 [2024-07-24 10:48:26.325467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 
00:26:18.970 [2024-07-24 10:48:26.325532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 
00:26:18.970 [2024-07-24 10:48:26.325750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.325778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.325789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.332281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.332341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.332376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.332399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.332428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.332450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.332479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.332522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.332550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.332583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.332598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.332608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.332622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184500 00:26:18.970 [2024-07-24 10:48:26.332632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 
00:26:18.970 [2024-07-24 10:48:26.332648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x184000 00:26:18.970 [2024-07-24 10:48:26.332658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56270 cdw0:740a0000 sqhd:6446 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.335985] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:26:18.970 [2024-07-24 10:48:26.336083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.336103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:2498 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.336117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.336128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:2498 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.336144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.336155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:2498 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.336167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.336177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:2498 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.338692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.970 [2024-07-24 10:48:26.338714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:18.970 [2024-07-24 10:48:26.338726] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
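The WRITE dumps above are SPDK flushing every command still queued on I/O qpair 1 when its submission queue is deleted by the shutdown test: each entry prints the LBA and block count plus the SGL KEYED DATA BLOCK fields (ADDRESS, len, key), i.e. the remote data buffer and RDMA rkey the transfer would have used, and every completion is ABORTED - SQ DELETION, status code type 00h / status code 08h, as the (00/08) in the print indicates. The LBAs advance by 128 blocks per command, which lines up with the 65536-byte bdevperf I/Os (-q 64 -o 65536 -w verify, visible in the killed command line further down) if the namespace uses 512-byte logical blocks; the block size itself is not printed here, so treat that as an assumption. A quick arithmetic check, nothing SPDK-specific:

  $ echo $((128 * 512))    # len:128 blocks x assumed 512-byte LBA = one 65536-byte bdevperf I/O
  65536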
00:26:18.970 [2024-07-24 10:48:26.338748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.338760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:59f8 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.338772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.338782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:59f8 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.338794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.338805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:59f8 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.338816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.970 [2024-07-24 10:48:26.338826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:59f8 p:1 m:0 dnr:0 00:26:18.970 [2024-07-24 10:48:26.340925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.340976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:18.971 [2024-07-24 10:48:26.340998] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.971 [2024-07-24 10:48:26.341048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.341074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:ce5a p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.341097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.341119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:ce5a p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.341142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.341163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:ce5a p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.341186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.341207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:ce5a p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.343396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.343430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:18.971 [2024-07-24 10:48:26.343450] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.971 [2024-07-24 10:48:26.343513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.343538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:e1d6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.343560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.343583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:e1d6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.343606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.343628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:e1d6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.343650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.343671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:e1d6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.345818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.345849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:18.971 [2024-07-24 10:48:26.345868] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.971 [2024-07-24 10:48:26.345914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.345927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:74a0 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.345939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.345950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:74a0 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.345961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.345972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:74a0 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.345983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.345993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:74a0 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.348057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.348089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.971 [2024-07-24 10:48:26.348108] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.971 [2024-07-24 10:48:26.348143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.348172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9ec6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.348195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.348215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9ec6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.348238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.348259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9ec6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.348283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.348304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9ec6 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.350418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.350451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:18.971 [2024-07-24 10:48:26.350470] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.971 [2024-07-24 10:48:26.350519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.350544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9258 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.350567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.350587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9258 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.350611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.350643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9258 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.350665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.350687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:9258 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.352743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.352776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:26:18.971 [2024-07-24 10:48:26.352794] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.971 [2024-07-24 10:48:26.352834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.352846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:a422 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.352857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.352868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:a422 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.352879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.352893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:a422 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.352904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.352915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:a422 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.354809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.354841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:18.971 [2024-07-24 10:48:26.354860] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.971 [2024-07-24 10:48:26.354896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.354920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:8218 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.354943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.354965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:8218 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.354989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.355010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:8218 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.355033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.971 [2024-07-24 10:48:26.355054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:8218 p:1 m:0 dnr:0 00:26:18.971 [2024-07-24 10:48:26.356989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.971 [2024-07-24 10:48:26.357022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:18.972 [2024-07-24 10:48:26.357040] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:18.972 [2024-07-24 10:48:26.357075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.972 [2024-07-24 10:48:26.357097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:22d6 p:1 m:0 dnr:0 00:26:18.972 [2024-07-24 10:48:26.357120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.972 [2024-07-24 10:48:26.357140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:22d6 p:1 m:0 dnr:0 00:26:18.972 [2024-07-24 10:48:26.357162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.972 [2024-07-24 10:48:26.357182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:22d6 p:1 m:0 dnr:0 00:26:18.972 [2024-07-24 10:48:26.357205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.972 [2024-07-24 10:48:26.357225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56270 cdw0:0 sqhd:22d6 p:1 m:0 dnr:0 00:26:18.972 [2024-07-24 10:48:26.377940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:18.972 [2024-07-24 10:48:26.377989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:18.972 [2024-07-24 10:48:26.378010] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:18.972 [2024-07-24 10:48:26.386862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.972 [2024-07-24 10:48:26.386887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:18.972 [2024-07-24 10:48:26.386895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:18.972 [2024-07-24 10:48:26.386929] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:18.972 [2024-07-24 10:48:26.386940] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:18.972 [2024-07-24 10:48:26.386949] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:18.972 [2024-07-24 10:48:26.386963] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:18.972 [2024-07-24 10:48:26.386973] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:18.972 [2024-07-24 10:48:26.386981] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:18.972 [2024-07-24 10:48:26.386991] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
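Each repeated four-entry group above is the same teardown pattern for a different subsystem (cnode1 through cnode10): the outstanding admin ASYNC EVENT REQUEST commands (opcode 0c) are aborted as that controller's admin queue is deleted, completion polling then fails with CQ transport error -6 (ENXIO, printed as "No such device or address"), the controller is marked failed, and bdev_nvme schedules a reset while skipping the failover attempts already in flight. The per-device bdevperf results follow; the MiB/s column there is simply IOPS times the 64 KiB I/O size. A hedged sketch of how one might sanity-check a row and inspect the controllers over the bdevperf RPC socket seen later in this log (the scripts/rpc.py path and the bdev_nvme_get_controllers RPC name are assumptions about the SPDK tree under test, not taken from this output):

  $ awk 'BEGIN { printf "%.2f\n", 134.42 * 65536 / 1048576 }'    # Nvme1n1: IOPS x 64 KiB = MiB/s
  8.40
  $ ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers    # list controller state during the reset storm (assumed RPC name)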
00:26:18.972 [2024-07-24 10:48:26.387067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:18.972 [2024-07-24 10:48:26.387076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:18.972 [2024-07-24 10:48:26.387085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:18.972 [2024-07-24 10:48:26.387095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:18.972 [2024-07-24 10:48:26.389172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:18.972 task offset: 40960 on job bdev=Nvme6n1 fails
00:26:18.972
00:26:18.972 Latency(us)
00:26:18.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:18.972 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme1n1 ended in about 1.90 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme1n1 : 1.90 134.42 8.40 33.60 0.00 378769.99 39945.75 1078535.31
00:26:18.972 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme2n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme2n1 : 1.91 147.48 9.22 33.59 0.00 348489.33 4868.39 1078535.31
00:26:18.972 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme3n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme3n1 : 1.91 147.93 9.25 33.57 0.00 344712.32 12046.14 1078535.31
00:26:18.972 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme4n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme4n1 : 1.91 146.29 9.14 33.56 0.00 344972.43 5118.05 1078535.31
00:26:18.972 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme5n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme5n1 : 1.91 142.56 8.91 33.54 0.00 349320.63 19723.22 1078535.31
00:26:18.972 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme6n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme6n1 : 1.91 145.13 9.07 33.53 0.00 341030.82 25590.25 1070546.16
00:26:18.972 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme7n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme7n1 : 1.91 134.07 8.38 33.52 0.00 360952.59 31332.45 1158426.82
00:26:18.972 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme8n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme8n1 : 1.91 134.01 8.38 33.50 0.00 358305.99 33953.89 1142448.52
00:26:18.972 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme9n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme9n1 : 1.91 133.95 8.37 33.49 0.00 355120.47 51679.82 1134459.37
00:26:18.972 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:18.972 Job: Nvme10n1 ended in about 1.91 seconds with error
00:26:18.972 Verification LBA range: start 0x0 length 0x400
00:26:18.972 Nvme10n1 : 1.91 100.42 6.28 33.47 0.00 440091.31 51929.48 1118481.07
00:26:18.972 ===================================================================================================================
00:26:18.972 Total : 1366.26 85.39 335.38 0.00 360073.65 4868.39 1158426.82
00:26:18.972 [2024-07-24 10:48:26.408987] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:18.972 [2024-07-24 10:48:26.409007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:26:18.972 [2024-07-24 10:48:26.409019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:19.230 [2024-07-24 10:48:26.417738] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:19.230 [2024-07-24 10:48:26.417796] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:19.230 [2024-07-24 10:48:26.417806] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:26:19.230 [2024-07-24 10:48:26.417882] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:19.230 [2024-07-24 10:48:26.417896] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:19.230 [2024-07-24 10:48:26.417903] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:26:19.230 [2024-07-24 10:48:26.417983] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:19.230 [2024-07-24 10:48:26.417995] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:19.230 [2024-07-24 10:48:26.418003] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:26:19.231 [2024-07-24 10:48:26.421416] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:19.231 [2024-07-24 10:48:26.421456] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:19.231 [2024-07-24 10:48:26.421473] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:26:19.231 [2024-07-24 10:48:26.421593] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:19.231 [2024-07-24 10:48:26.421618] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:19.231 [2024-07-24 10:48:26.421634] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:26:19.231 [2024-07-24 10:48:26.421764] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:19.231 [2024-07-24 10:48:26.421789] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*:
RDMA connect error -74 00:26:19.231 [2024-07-24 10:48:26.421805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c60c0 00:26:19.231 [2024-07-24 10:48:26.421893] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:19.231 [2024-07-24 10:48:26.421917] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:19.231 [2024-07-24 10:48:26.421934] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:26:19.231 [2024-07-24 10:48:26.422766] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:19.231 [2024-07-24 10:48:26.422797] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:19.231 [2024-07-24 10:48:26.422813] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928d5c0 00:26:19.231 [2024-07-24 10:48:26.422938] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:19.231 [2024-07-24 10:48:26.422950] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:19.231 [2024-07-24 10:48:26.422959] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928ee00 00:26:19.231 [2024-07-24 10:48:26.423052] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:19.231 [2024-07-24 10:48:26.423064] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:19.231 [2024-07-24 10:48:26.423072] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928dd80 00:26:19.489 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2325665 00:26:19.489 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:26:19.489 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:19.489 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:19.489 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:19.489 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:19.489 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@120 -- # set +e 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:19.490 rmmod nvme_rdma 00:26:19.490 rmmod nvme_fabrics 00:26:19.490 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2325665 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:19.490 00:26:19.490 real 0m4.409s 00:26:19.490 user 0m14.607s 00:26:19.490 sys 0m1.022s 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:19.490 ************************************ 00:26:19.490 END TEST nvmf_shutdown_tc3 00:26:19.490 ************************************ 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:26:19.490 00:26:19.490 real 0m20.938s 00:26:19.490 user 1m1.428s 00:26:19.490 sys 0m7.169s 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:19.490 ************************************ 00:26:19.490 END TEST nvmf_shutdown 00:26:19.490 ************************************ 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:26:19.490 00:26:19.490 real 16m8.988s 00:26:19.490 user 48m58.941s 00:26:19.490 sys 2m28.138s 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:19.490 10:48:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:19.490 ************************************ 00:26:19.490 END TEST nvmf_target_extra 00:26:19.490 ************************************ 00:26:19.490 10:48:26 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:19.490 10:48:26 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:19.490 10:48:26 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:19.490 10:48:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:19.490 ************************************ 00:26:19.490 START TEST nvmf_host 00:26:19.490 ************************************ 00:26:19.490 10:48:26 
nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:19.749 * Looking for test storage... 00:26:19.749 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:19.749 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:19.750 10:48:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.750 ************************************ 00:26:19.750 START TEST nvmf_multicontroller 00:26:19.750 ************************************ 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:19.750 * Looking for test 
storage... 00:26:19.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.750 10:48:27 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:26:19.750 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:26:19.750 00:26:19.750 real 0m0.114s 00:26:19.750 user 0m0.059s 00:26:19.750 sys 0m0.062s 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:19.750 ************************************ 00:26:19.750 END TEST nvmf_multicontroller 00:26:19.750 ************************************ 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.750 ************************************ 00:26:19.750 START TEST nvmf_aer 00:26:19.750 ************************************ 00:26:19.750 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:20.009 * Looking for test storage... 
00:26:20.009 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.009 10:48:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:26:25.280 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:25.281 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:25.281 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:25.281 Found net devices under 0000:da:00.0: mlx_0_0 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:25.281 Found net devices under 0000:da:00.1: mlx_0_1 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:25.281 10:48:32 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:25.281 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:25.281 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:25.281 altname enp218s0f0np0 00:26:25.281 altname ens818f0np0 00:26:25.281 inet 192.168.100.8/24 scope global mlx_0_0 00:26:25.281 valid_lft forever preferred_lft forever 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:25.281 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:25.281 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:25.281 altname enp218s0f1np1 00:26:25.281 altname ens818f1np1 00:26:25.281 inet 192.168.100.9/24 scope global mlx_0_1 00:26:25.281 valid_lft forever preferred_lft forever 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:26:25.281 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:25.282 10:48:32 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:25.282 192.168.100.9' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:25.282 192.168.100.9' 
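For anyone reconstructing the interface setup from the trace above: allocate_nic_ips walks the RDMA-capable netdevs returned by get_rdma_if_list (mlx_0_0 and mlx_0_1 in this run) and reads each interface's first IPv4 address. A minimal sketch of what the traced nvmf/common.sh helpers boil down to, rebuilt from the xtrace lines rather than quoted from the script:

    # Print the first IPv4 address configured on an interface
    # (192.168.100.8 for mlx_0_0, 192.168.100.9 for mlx_0_1 in this run).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # get_available_rdma_ips then concatenates the per-interface results into
    # RDMA_IP_LIST, which evaluates to "192.168.100.8 192.168.100.9" here.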
00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:25.282 192.168.100.9' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2329499 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2329499 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2329499 ']' 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.282 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.282 [2024-07-24 10:48:32.691038] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:26:25.282 [2024-07-24 10:48:32.691093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.282 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.541 [2024-07-24 10:48:32.748702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.541 [2024-07-24 10:48:32.793207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:25.541 [2024-07-24 10:48:32.793250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.541 [2024-07-24 10:48:32.793257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.541 [2024-07-24 10:48:32.793263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.541 [2024-07-24 10:48:32.793269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.541 [2024-07-24 10:48:32.793313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.541 [2024-07-24 10:48:32.793440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.541 [2024-07-24 10:48:32.793513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.541 [2024-07-24 10:48:32.793515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.541 10:48:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.541 [2024-07-24 10:48:32.961994] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15e46a0/0x15e8b70) succeed. 00:26:25.541 [2024-07-24 10:48:32.971138] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15e5c90/0x162a200) succeed. 
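At this point nvmf_tgt is up and the RDMA transport has been created (the two create_ib_device notices above correspond to mlx5_0 and mlx5_1). The target-side configuration aer.sh performs around this point, condensed from the rpc_cmd calls in the trace (the transport-create just above, the bdev/subsystem calls traced below), is roughly the following sequence; the explicit rpc.py path is assumed for illustration, since the trace only shows the rpc_cmd wrapper:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path assumed
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420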
00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.800 Malloc0 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.800 [2024-07-24 10:48:33.135103] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.800 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:25.800 [ 00:26:25.800 { 00:26:25.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:25.801 "subtype": "Discovery", 00:26:25.801 "listen_addresses": [], 00:26:25.801 "allow_any_host": true, 00:26:25.801 "hosts": [] 00:26:25.801 }, 00:26:25.801 { 00:26:25.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.801 "subtype": "NVMe", 00:26:25.801 "listen_addresses": [ 00:26:25.801 { 00:26:25.801 "trtype": "RDMA", 00:26:25.801 "adrfam": "IPv4", 00:26:25.801 "traddr": "192.168.100.8", 00:26:25.801 "trsvcid": "4420" 00:26:25.801 } 00:26:25.801 ], 00:26:25.801 "allow_any_host": true, 00:26:25.801 "hosts": [], 00:26:25.801 "serial_number": "SPDK00000000000001", 00:26:25.801 "model_number": "SPDK bdev Controller", 00:26:25.801 "max_namespaces": 2, 00:26:25.801 "min_cntlid": 1, 00:26:25.801 "max_cntlid": 65519, 00:26:25.801 "namespaces": [ 00:26:25.801 { 00:26:25.801 "nsid": 1, 00:26:25.801 "bdev_name": "Malloc0", 00:26:25.801 "name": "Malloc0", 00:26:25.801 "nguid": "A17C3426065E4B358D36E4504323BBF4", 00:26:25.801 "uuid": "a17c3426-065e-4b35-8d36-e4504323bbf4" 00:26:25.801 } 00:26:25.801 ] 00:26:25.801 } 00:26:25.801 ] 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2329569 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:25.801 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:25.801 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.059 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:26.059 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:26.059 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:26.060 Malloc1 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:26.060 [ 00:26:26.060 { 00:26:26.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:26.060 "subtype": "Discovery", 00:26:26.060 "listen_addresses": [], 00:26:26.060 "allow_any_host": true, 00:26:26.060 "hosts": [] 00:26:26.060 }, 00:26:26.060 { 00:26:26.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.060 "subtype": "NVMe", 00:26:26.060 "listen_addresses": [ 00:26:26.060 { 00:26:26.060 "trtype": "RDMA", 00:26:26.060 "adrfam": "IPv4", 00:26:26.060 "traddr": "192.168.100.8", 00:26:26.060 "trsvcid": "4420" 00:26:26.060 } 00:26:26.060 ], 00:26:26.060 "allow_any_host": true, 00:26:26.060 "hosts": [], 00:26:26.060 "serial_number": "SPDK00000000000001", 00:26:26.060 "model_number": "SPDK bdev Controller", 00:26:26.060 "max_namespaces": 2, 00:26:26.060 "min_cntlid": 1, 00:26:26.060 "max_cntlid": 65519, 00:26:26.060 "namespaces": [ 00:26:26.060 { 00:26:26.060 "nsid": 1, 00:26:26.060 "bdev_name": "Malloc0", 00:26:26.060 "name": "Malloc0", 00:26:26.060 "nguid": "A17C3426065E4B358D36E4504323BBF4", 00:26:26.060 "uuid": "a17c3426-065e-4b35-8d36-e4504323bbf4" 00:26:26.060 }, 00:26:26.060 { 00:26:26.060 "nsid": 2, 00:26:26.060 "bdev_name": "Malloc1", 00:26:26.060 "name": "Malloc1", 00:26:26.060 "nguid": "BF7234E98140487496A541232C29AB39", 00:26:26.060 "uuid": "bf7234e9-8140-4874-96a5-41232c29ab39" 00:26:26.060 } 00:26:26.060 ] 00:26:26.060 } 00:26:26.060 ] 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2329569 00:26:26.060 Asynchronous Event Request test 00:26:26.060 Attaching to 192.168.100.8 00:26:26.060 Attached to 192.168.100.8 00:26:26.060 Registering asynchronous event callbacks... 00:26:26.060 Starting namespace attribute notice tests for all controllers... 00:26:26.060 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:26.060 aer_cb - Changed Namespace 00:26:26.060 Cleaning up... 
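The "Asynchronous Event Request test" block above is the host-side aer tool at work: it connects to nqn.2016-06.io.spdk:cnode1 over RDMA, registers its asynchronous event callbacks, and creates /tmp/aer_touch_file so the script's waitforfile loop can proceed. The script then hot-adds a second namespace, which the target reports as a Namespace Attribute Changed event (aen_event_type 0x02, log page 4 in the callback output). Condensed from the traced commands:

    # Host side (started earlier in the trace, kept running in the background):
    #   test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 \
    #       subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
    # Target-side trigger for the AEN reported above:
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2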
00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.060 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:26.319 rmmod nvme_rdma 00:26:26.319 rmmod nvme_fabrics 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2329499 ']' 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2329499 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2329499 ']' 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2329499 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2329499 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2329499' 00:26:26.319 killing process 
with pid 2329499 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2329499 00:26:26.319 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2329499 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:26.578 00:26:26.578 real 0m6.670s 00:26:26.578 user 0m5.568s 00:26:26.578 sys 0m4.443s 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:26.578 ************************************ 00:26:26.578 END TEST nvmf_aer 00:26:26.578 ************************************ 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.578 ************************************ 00:26:26.578 START TEST nvmf_async_init 00:26:26.578 ************************************ 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:26.578 * Looking for test storage... 00:26:26.578 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:26.578 10:48:33 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
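The host identity used by the nvmf_async_init test is generated on the fly, as the nvme gen-hostnqn and NVME_HOSTID lines just above show. A sketch of the derivation (the exact parameter expansion inside nvmf/common.sh is not visible in the trace; this form simply reproduces the values shown):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:803833e2-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID portion: 803833e2-2ada-e911-906e-0017a4403562
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")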
00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.578 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:26.579 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1ed9214d369b4692bb88a9204d7816f5 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.838 10:48:34 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.838 10:48:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 
00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:32.104 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:32.104 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:32.104 Found net devices under 0000:da:00.0: mlx_0_0 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:32.104 Found net devices under 0000:da:00.1: mlx_0_1 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 
== \m\l\x\_\0\_\0 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:32.104 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:32.105 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:32.105 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:32.105 altname enp218s0f0np0 00:26:32.105 altname ens818f0np0 00:26:32.105 inet 192.168.100.8/24 scope global mlx_0_0 00:26:32.105 valid_lft forever preferred_lft forever 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:32.105 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:32.105 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:32.105 altname enp218s0f1np1 00:26:32.105 altname ens818f1np1 00:26:32.105 inet 192.168.100.9/24 scope global mlx_0_1 00:26:32.105 valid_lft forever preferred_lft 
forever 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:32.105 192.168.100.9' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:32.105 192.168.100.9' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:32.105 192.168.100.9' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2332633 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2332633 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2332633 ']' 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
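Editor's note: the address bookkeeping traced above reduces to a few lines. A minimal sketch, with the two addresses copied from this run; each per-interface address itself is pulled the same way the trace shows, via ip -o -4 addr show piped through awk '{print $4}' and cut -d/ -f1:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9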
00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.105 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.105 [2024-07-24 10:48:39.403471] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:26:32.105 [2024-07-24 10:48:39.403518] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.105 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.105 [2024-07-24 10:48:39.457877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.105 [2024-07-24 10:48:39.498350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.105 [2024-07-24 10:48:39.498390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.105 [2024-07-24 10:48:39.498396] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.105 [2024-07-24 10:48:39.498402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.105 [2024-07-24 10:48:39.498407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.105 [2024-07-24 10:48:39.498424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.363 [2024-07-24 10:48:39.645733] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x752240/0x7566f0) succeed. 00:26:32.363 [2024-07-24 10:48:39.654433] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7536f0/0x797d80) succeed. 
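Editor's note: at this point nvmf_tgt (pid 2332633, core mask 0x1) is up and the RDMA transport has been created over both mlx5 devices. A hedged equivalent of the rpc_cmd call above, assuming scripts/rpc.py from the same SPDK tree and the default /var/tmp/spdk.sock socket:

  # Same transport-creation RPC the harness issues through its rpc_cmd wrapper.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024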
00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.363 null0 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.363 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1ed9214d369b4692bb88a9204d7816f5 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.364 [2024-07-24 10:48:39.737732] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.364 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.622 nvme0n1 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.622 [ 
00:26:32.622 { 00:26:32.622 "name": "nvme0n1", 00:26:32.622 "aliases": [ 00:26:32.622 "1ed9214d-369b-4692-bb88-a9204d7816f5" 00:26:32.622 ], 00:26:32.622 "product_name": "NVMe disk", 00:26:32.622 "block_size": 512, 00:26:32.622 "num_blocks": 2097152, 00:26:32.622 "uuid": "1ed9214d-369b-4692-bb88-a9204d7816f5", 00:26:32.622 "assigned_rate_limits": { 00:26:32.622 "rw_ios_per_sec": 0, 00:26:32.622 "rw_mbytes_per_sec": 0, 00:26:32.622 "r_mbytes_per_sec": 0, 00:26:32.622 "w_mbytes_per_sec": 0 00:26:32.622 }, 00:26:32.622 "claimed": false, 00:26:32.622 "zoned": false, 00:26:32.622 "supported_io_types": { 00:26:32.622 "read": true, 00:26:32.622 "write": true, 00:26:32.622 "unmap": false, 00:26:32.622 "flush": true, 00:26:32.622 "reset": true, 00:26:32.622 "nvme_admin": true, 00:26:32.622 "nvme_io": true, 00:26:32.622 "nvme_io_md": false, 00:26:32.622 "write_zeroes": true, 00:26:32.622 "zcopy": false, 00:26:32.622 "get_zone_info": false, 00:26:32.622 "zone_management": false, 00:26:32.622 "zone_append": false, 00:26:32.622 "compare": true, 00:26:32.622 "compare_and_write": true, 00:26:32.622 "abort": true, 00:26:32.622 "seek_hole": false, 00:26:32.622 "seek_data": false, 00:26:32.622 "copy": true, 00:26:32.622 "nvme_iov_md": false 00:26:32.622 }, 00:26:32.622 "memory_domains": [ 00:26:32.622 { 00:26:32.622 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:32.622 "dma_device_type": 0 00:26:32.622 } 00:26:32.622 ], 00:26:32.622 "driver_specific": { 00:26:32.622 "nvme": [ 00:26:32.622 { 00:26:32.622 "trid": { 00:26:32.622 "trtype": "RDMA", 00:26:32.622 "adrfam": "IPv4", 00:26:32.622 "traddr": "192.168.100.8", 00:26:32.622 "trsvcid": "4420", 00:26:32.622 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:32.622 }, 00:26:32.622 "ctrlr_data": { 00:26:32.622 "cntlid": 1, 00:26:32.622 "vendor_id": "0x8086", 00:26:32.622 "model_number": "SPDK bdev Controller", 00:26:32.622 "serial_number": "00000000000000000000", 00:26:32.622 "firmware_revision": "24.09", 00:26:32.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.622 "oacs": { 00:26:32.622 "security": 0, 00:26:32.622 "format": 0, 00:26:32.622 "firmware": 0, 00:26:32.622 "ns_manage": 0 00:26:32.622 }, 00:26:32.622 "multi_ctrlr": true, 00:26:32.622 "ana_reporting": false 00:26:32.622 }, 00:26:32.622 "vs": { 00:26:32.622 "nvme_version": "1.3" 00:26:32.622 }, 00:26:32.622 "ns_data": { 00:26:32.622 "id": 1, 00:26:32.622 "can_share": true 00:26:32.622 } 00:26:32.622 } 00:26:32.622 ], 00:26:32.622 "mp_policy": "active_passive" 00:26:32.622 } 00:26:32.622 } 00:26:32.622 ] 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.622 [2024-07-24 10:48:39.847039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:32.622 [2024-07-24 10:48:39.865433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:32.622 [2024-07-24 10:48:39.886626] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
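Editor's note: the bdev dump above reports cntlid 1 for nvme0n1; after the bdev_nvme_reset_controller call the host reconnects and the next dump (below) shows cntlid 2. A hedged way to observe that outside the harness, assuming rpc.py and jq are available and relying on the JSON layout shown above:

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 2 afterwards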
00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.622 [ 00:26:32.622 { 00:26:32.622 "name": "nvme0n1", 00:26:32.622 "aliases": [ 00:26:32.622 "1ed9214d-369b-4692-bb88-a9204d7816f5" 00:26:32.622 ], 00:26:32.622 "product_name": "NVMe disk", 00:26:32.622 "block_size": 512, 00:26:32.622 "num_blocks": 2097152, 00:26:32.622 "uuid": "1ed9214d-369b-4692-bb88-a9204d7816f5", 00:26:32.622 "assigned_rate_limits": { 00:26:32.622 "rw_ios_per_sec": 0, 00:26:32.622 "rw_mbytes_per_sec": 0, 00:26:32.622 "r_mbytes_per_sec": 0, 00:26:32.622 "w_mbytes_per_sec": 0 00:26:32.622 }, 00:26:32.622 "claimed": false, 00:26:32.622 "zoned": false, 00:26:32.622 "supported_io_types": { 00:26:32.622 "read": true, 00:26:32.622 "write": true, 00:26:32.622 "unmap": false, 00:26:32.622 "flush": true, 00:26:32.622 "reset": true, 00:26:32.622 "nvme_admin": true, 00:26:32.622 "nvme_io": true, 00:26:32.622 "nvme_io_md": false, 00:26:32.622 "write_zeroes": true, 00:26:32.622 "zcopy": false, 00:26:32.622 "get_zone_info": false, 00:26:32.622 "zone_management": false, 00:26:32.622 "zone_append": false, 00:26:32.622 "compare": true, 00:26:32.622 "compare_and_write": true, 00:26:32.622 "abort": true, 00:26:32.622 "seek_hole": false, 00:26:32.622 "seek_data": false, 00:26:32.622 "copy": true, 00:26:32.622 "nvme_iov_md": false 00:26:32.622 }, 00:26:32.622 "memory_domains": [ 00:26:32.622 { 00:26:32.622 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:32.622 "dma_device_type": 0 00:26:32.622 } 00:26:32.622 ], 00:26:32.622 "driver_specific": { 00:26:32.622 "nvme": [ 00:26:32.622 { 00:26:32.622 "trid": { 00:26:32.622 "trtype": "RDMA", 00:26:32.622 "adrfam": "IPv4", 00:26:32.622 "traddr": "192.168.100.8", 00:26:32.622 "trsvcid": "4420", 00:26:32.622 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:32.622 }, 00:26:32.622 "ctrlr_data": { 00:26:32.622 "cntlid": 2, 00:26:32.622 "vendor_id": "0x8086", 00:26:32.622 "model_number": "SPDK bdev Controller", 00:26:32.622 "serial_number": "00000000000000000000", 00:26:32.622 "firmware_revision": "24.09", 00:26:32.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.622 "oacs": { 00:26:32.622 "security": 0, 00:26:32.622 "format": 0, 00:26:32.622 "firmware": 0, 00:26:32.622 "ns_manage": 0 00:26:32.622 }, 00:26:32.622 "multi_ctrlr": true, 00:26:32.622 "ana_reporting": false 00:26:32.622 }, 00:26:32.622 "vs": { 00:26:32.622 "nvme_version": "1.3" 00:26:32.622 }, 00:26:32.622 "ns_data": { 00:26:32.622 "id": 1, 00:26:32.622 "can_share": true 00:26:32.622 } 00:26:32.622 } 00:26:32.622 ], 00:26:32.622 "mp_policy": "active_passive" 00:26:32.622 } 00:26:32.622 } 00:26:32.622 ] 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kDn3EwK8D6 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kDn3EwK8D6 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.622 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.623 [2024-07-24 10:48:39.958055] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDn3EwK8D6 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDn3EwK8D6 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.623 10:48:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.623 [2024-07-24 10:48:39.978105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:32.623 nvme0n1 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.623 [ 00:26:32.623 { 00:26:32.623 "name": "nvme0n1", 00:26:32.623 "aliases": [ 00:26:32.623 "1ed9214d-369b-4692-bb88-a9204d7816f5" 00:26:32.623 ], 00:26:32.623 "product_name": "NVMe disk", 00:26:32.623 "block_size": 512, 00:26:32.623 "num_blocks": 2097152, 00:26:32.623 "uuid": 
"1ed9214d-369b-4692-bb88-a9204d7816f5", 00:26:32.623 "assigned_rate_limits": { 00:26:32.623 "rw_ios_per_sec": 0, 00:26:32.623 "rw_mbytes_per_sec": 0, 00:26:32.623 "r_mbytes_per_sec": 0, 00:26:32.623 "w_mbytes_per_sec": 0 00:26:32.623 }, 00:26:32.623 "claimed": false, 00:26:32.623 "zoned": false, 00:26:32.623 "supported_io_types": { 00:26:32.623 "read": true, 00:26:32.623 "write": true, 00:26:32.623 "unmap": false, 00:26:32.623 "flush": true, 00:26:32.623 "reset": true, 00:26:32.623 "nvme_admin": true, 00:26:32.623 "nvme_io": true, 00:26:32.623 "nvme_io_md": false, 00:26:32.623 "write_zeroes": true, 00:26:32.623 "zcopy": false, 00:26:32.623 "get_zone_info": false, 00:26:32.623 "zone_management": false, 00:26:32.623 "zone_append": false, 00:26:32.623 "compare": true, 00:26:32.623 "compare_and_write": true, 00:26:32.623 "abort": true, 00:26:32.623 "seek_hole": false, 00:26:32.623 "seek_data": false, 00:26:32.623 "copy": true, 00:26:32.623 "nvme_iov_md": false 00:26:32.623 }, 00:26:32.623 "memory_domains": [ 00:26:32.623 { 00:26:32.623 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:32.623 "dma_device_type": 0 00:26:32.623 } 00:26:32.623 ], 00:26:32.623 "driver_specific": { 00:26:32.623 "nvme": [ 00:26:32.623 { 00:26:32.623 "trid": { 00:26:32.623 "trtype": "RDMA", 00:26:32.623 "adrfam": "IPv4", 00:26:32.623 "traddr": "192.168.100.8", 00:26:32.623 "trsvcid": "4421", 00:26:32.623 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:32.623 }, 00:26:32.623 "ctrlr_data": { 00:26:32.623 "cntlid": 3, 00:26:32.623 "vendor_id": "0x8086", 00:26:32.623 "model_number": "SPDK bdev Controller", 00:26:32.623 "serial_number": "00000000000000000000", 00:26:32.623 "firmware_revision": "24.09", 00:26:32.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.623 "oacs": { 00:26:32.623 "security": 0, 00:26:32.623 "format": 0, 00:26:32.623 "firmware": 0, 00:26:32.623 "ns_manage": 0 00:26:32.623 }, 00:26:32.623 "multi_ctrlr": true, 00:26:32.623 "ana_reporting": false 00:26:32.623 }, 00:26:32.623 "vs": { 00:26:32.623 "nvme_version": "1.3" 00:26:32.623 }, 00:26:32.623 "ns_data": { 00:26:32.623 "id": 1, 00:26:32.623 "can_share": true 00:26:32.623 } 00:26:32.623 } 00:26:32.623 ], 00:26:32.623 "mp_policy": "active_passive" 00:26:32.623 } 00:26:32.623 } 00:26:32.623 ] 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.623 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.kDn3EwK8D6 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 
00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.881 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:32.882 rmmod nvme_rdma 00:26:32.882 rmmod nvme_fabrics 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2332633 ']' 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2332633 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2332633 ']' 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2332633 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2332633 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2332633' 00:26:32.882 killing process with pid 2332633 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2332633 00:26:32.882 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2332633 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:33.141 00:26:33.141 real 0m6.487s 00:26:33.141 user 0m2.553s 00:26:33.141 sys 0m4.373s 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.141 ************************************ 00:26:33.141 END TEST nvmf_async_init 00:26:33.141 ************************************ 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.141 ************************************ 00:26:33.141 START TEST dma 00:26:33.141 ************************************ 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:33.141 * Looking for test storage... 
00:26:33.141 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.141 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:33.142 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.400 10:48:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:38.706 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # net_devs=() 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # e810=() 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # local -ga e810 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # x722=() 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # local -ga x722 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # mlx=() 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:38.707 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:38.707 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:38.707 Found net devices under 0000:da:00.0: mlx_0_0 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- 
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:38.707 Found net devices under 0000:da:00.1: mlx_0_1 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # uname 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:38.707 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:38.707 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:38.707 altname enp218s0f0np0 00:26:38.707 altname ens818f0np0 00:26:38.707 inet 192.168.100.8/24 scope global mlx_0_0 00:26:38.707 valid_lft forever preferred_lft forever 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:38.707 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:38.708 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:38.708 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:38.708 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:38.708 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:38.708 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:38.708 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:38.708 altname enp218s0f1np1 00:26:38.708 altname ens818f1np1 00:26:38.708 inet 192.168.100.9/24 scope global mlx_0_1 00:26:38.708 valid_lft forever preferred_lft forever 00:26:38.708 10:48:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # return 0 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:38.708 10:48:46 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:38.708 192.168.100.9' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:38.708 192.168.100.9' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # head -n 1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # head -n 1 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:38.708 192.168.100.9' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # tail -n +2 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # nvmfpid=2335925 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # waitforlisten 2335925 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 2335925 ']' 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:38.708 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:38.708 [2024-07-24 10:48:46.140905] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:26:38.708 [2024-07-24 10:48:46.140948] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.967 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.967 [2024-07-24 10:48:46.194244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:38.967 [2024-07-24 10:48:46.234716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.967 [2024-07-24 10:48:46.234754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.967 [2024-07-24 10:48:46.234761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.967 [2024-07-24 10:48:46.234767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.967 [2024-07-24 10:48:46.234772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
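The per-interface address discovery traced above reduces to one pipeline per RDMA netdev. A condensed sketch of the get_ip_address helper as nvmf/common.sh exercises it here is shown below; the two assignments simply restate the values this run ended up with.

get_ip_address() {
    # First IPv4 address configured on the given netdev (what the @113 trace lines do)
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run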
00:26:38.967 [2024-07-24 10:48:46.234808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.967 [2024-07-24 10:48:46.234811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.967 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:38.967 [2024-07-24 10:48:46.381659] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdaae60/0xdaf310) succeed. 00:26:38.967 [2024-07-24 10:48:46.390381] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdac310/0xdf09a0) succeed. 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:39.225 Malloc0 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.225 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:39.226 [2024-07-24 10:48:46.539205] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # config=() 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # local subsystem config 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.226 { 00:26:39.226 "params": { 00:26:39.226 "name": "Nvme$subsystem", 00:26:39.226 "trtype": "$TEST_TRANSPORT", 00:26:39.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.226 "adrfam": "ipv4", 00:26:39.226 "trsvcid": "$NVMF_PORT", 00:26:39.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.226 "hdgst": ${hdgst:-false}, 00:26:39.226 "ddgst": ${ddgst:-false} 00:26:39.226 }, 00:26:39.226 "method": "bdev_nvme_attach_controller" 00:26:39.226 } 00:26:39.226 EOF 00:26:39.226 )") 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # cat 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # jq . 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@557 -- # IFS=, 00:26:39.226 10:48:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:39.226 "params": { 00:26:39.226 "name": "Nvme0", 00:26:39.226 "trtype": "rdma", 00:26:39.226 "traddr": "192.168.100.8", 00:26:39.226 "adrfam": "ipv4", 00:26:39.226 "trsvcid": "4420", 00:26:39.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:39.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:39.226 "hdgst": false, 00:26:39.226 "ddgst": false 00:26:39.226 }, 00:26:39.226 "method": "bdev_nvme_attach_controller" 00:26:39.226 }' 00:26:39.226 [2024-07-24 10:48:46.585426] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
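The target-side provisioning that host/dma.sh traces just above amounts to five JSON-RPC calls against the already running nvmf_tgt. A minimal stand-alone sketch of the same sequence issued by hand through scripts/rpc.py follows; it assumes the default /var/tmp/spdk.sock RPC socket, and the arguments mirror the rpc_cmd invocations recorded in this log.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport with the same shared-buffer count used by the test
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024

# 256 MB malloc bdev (512-byte blocks) exported as the only namespace of cnode0
$RPC bdev_malloc_create 256 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0

# RDMA listener on the mlx_0_0 address allocated earlier in this log
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420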
00:26:39.226 [2024-07-24 10:48:46.585474] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2335952 ] 00:26:39.226 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.226 [2024-07-24 10:48:46.633978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:39.226 [2024-07-24 10:48:46.674256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.226 [2024-07-24 10:48:46.674259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.784 bdev Nvme0n1 reports 1 memory domains 00:26:45.784 bdev Nvme0n1 supports RDMA memory domain 00:26:45.784 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:45.784 ========================================================================== 00:26:45.784 Latency [us] 00:26:45.784 IOPS MiB/s Average min max 00:26:45.784 Core 2: 21579.48 84.29 740.76 257.37 8611.65 00:26:45.784 Core 3: 21688.67 84.72 737.00 246.39 8699.76 00:26:45.784 ========================================================================== 00:26:45.784 Total : 43268.15 169.02 738.88 246.39 8699.76 00:26:45.784 00:26:45.784 Total operations: 216368, translate 216368 pull_push 0 memzero 0 00:26:45.784 10:48:52 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:26:45.784 10:48:52 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:26:45.784 10:48:52 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:26:45.784 [2024-07-24 10:48:52.094375] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:26:45.784 [2024-07-24 10:48:52.094424] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336868 ] 00:26:45.784 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.784 [2024-07-24 10:48:52.142722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:45.784 [2024-07-24 10:48:52.181909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.784 [2024-07-24 10:48:52.181912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.047 bdev Malloc0 reports 2 memory domains 00:26:51.047 bdev Malloc0 doesn't support RDMA memory domain 00:26:51.047 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:51.047 ========================================================================== 00:26:51.047 Latency [us] 00:26:51.047 IOPS MiB/s Average min max 00:26:51.047 Core 2: 14236.54 55.61 1123.07 490.60 1434.17 00:26:51.047 Core 3: 14242.54 55.63 1122.60 466.75 1913.76 00:26:51.047 ========================================================================== 00:26:51.047 Total : 28479.08 111.25 1122.84 466.75 1913.76 00:26:51.047 00:26:51.047 Total operations: 142450, translate 0 pull_push 569800 memzero 0 00:26:51.047 10:48:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:26:51.047 10:48:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:26:51.047 10:48:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:51.047 10:48:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:26:51.047 Ignoring -M option 00:26:51.048 [2024-07-24 10:48:57.514631] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:26:51.048 [2024-07-24 10:48:57.514685] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337783 ] 00:26:51.048 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.048 [2024-07-24 10:48:57.562310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:51.048 [2024-07-24 10:48:57.601897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.048 [2024-07-24 10:48:57.601900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.314 bdev c2add8d2-e0e0-45a7-8149-d12f8057cbb0 reports 1 memory domains 00:26:56.314 bdev c2add8d2-e0e0-45a7-8149-d12f8057cbb0 supports RDMA memory domain 00:26:56.314 Initialization complete, running randread IO for 5 sec on 2 cores 00:26:56.314 ========================================================================== 00:26:56.314 Latency [us] 00:26:56.314 IOPS MiB/s Average min max 00:26:56.314 Core 2: 70137.96 273.98 227.29 80.57 3253.65 00:26:56.314 Core 3: 70845.25 276.74 225.04 84.33 3205.02 00:26:56.314 ========================================================================== 00:26:56.314 Total : 140983.20 550.72 226.16 80.57 3253.65 00:26:56.314 00:26:56.314 Total operations: 705025, translate 0 pull_push 0 memzero 705025 00:26:56.314 10:49:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:26:56.314 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.314 [2024-07-24 10:49:03.127239] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:58.213 Initializing NVMe Controllers 00:26:58.213 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:26:58.213 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:58.213 Initialization complete. Launching workers. 00:26:58.213 ======================================================== 00:26:58.213 Latency(us) 00:26:58.213 Device Information : IOPS MiB/s Average min max 00:26:58.213 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.06 6981.55 7988.69 00:26:58.213 ======================================================== 00:26:58.213 Total : 2016.00 7.88 7972.06 6981.55 7988.69 00:26:58.213 00:26:58.213 10:49:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:26:58.213 10:49:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:26:58.213 10:49:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:58.213 10:49:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:26:58.213 [2024-07-24 10:49:05.465421] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
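All of the I/O above is generated from user space (test_dma and spdk_nvme_perf linked against the SPDK initiator). Purely for reference, the listener created for this test could also be reached from the Linux kernel initiator; the sequence below is a hypothetical usage sketch, not part of this run, reusing the nvme-rdma module loaded earlier and the '-i 15' queue-count option that common.sh folds into NVME_CONNECT.

# Hypothetical kernel-initiator cross-check against the listener above (not executed by this job)
modprobe nvme-rdma
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -i 15
nvme list                                        # the "SPDK bdev Controller" namespace should show up
nvme disconnect -n nqn.2016-06.io.spdk:cnode0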
00:26:58.213 [2024-07-24 10:49:05.465461] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338937 ] 00:26:58.213 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.213 [2024-07-24 10:49:05.513562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.213 [2024-07-24 10:49:05.553646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.213 [2024-07-24 10:49:05.553650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.770 bdev c559f988-ba5b-4345-b019-eea3d0ffea7e reports 1 memory domains 00:27:04.770 bdev c559f988-ba5b-4345-b019-eea3d0ffea7e supports RDMA memory domain 00:27:04.770 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:04.770 ========================================================================== 00:27:04.770 Latency [us] 00:27:04.770 IOPS MiB/s Average min max 00:27:04.770 Core 2: 18882.38 73.76 846.63 28.31 9664.48 00:27:04.770 Core 3: 19223.91 75.09 831.59 12.33 9819.54 00:27:04.770 ========================================================================== 00:27:04.770 Total : 38106.29 148.85 839.04 12.33 9819.54 00:27:04.770 00:27:04.770 Total operations: 190570, translate 190461 pull_push 0 memzero 109 00:27:04.770 10:49:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:04.770 10:49:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:27:04.770 10:49:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.770 10:49:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # sync 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@120 -- # set +e 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:04.770 rmmod nvme_rdma 00:27:04.770 rmmod nvme_fabrics 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set -e 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # return 0 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # '[' -n 2335925 ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@490 -- # killprocess 2335925 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 2335925 ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 2335925 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2335925 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2335925' 00:27:04.770 killing process with pid 2335925 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 2335925 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 2335925 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:04.770 00:27:04.770 real 0m30.908s 00:27:04.770 user 1m34.271s 00:27:04.770 sys 0m5.174s 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:04.770 ************************************ 00:27:04.770 END TEST dma 00:27:04.770 ************************************ 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.770 ************************************ 00:27:04.770 START TEST nvmf_identify 00:27:04.770 ************************************ 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:04.770 * Looking for test storage... 00:27:04.770 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.770 10:49:11 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:04.770 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.771 10:49:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:27:10.037 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 
-- # [[ mlx5_core == unknown ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:27:10.037 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:10.037 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:27:10.038 Found net devices under 0000:da:00.0: mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:27:10.038 Found net devices under 0000:da:00.1: mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:10.038 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:10.038 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:27:10.038 altname enp218s0f0np0 00:27:10.038 altname ens818f0np0 00:27:10.038 inet 192.168.100.8/24 scope global mlx_0_0 00:27:10.038 valid_lft forever preferred_lft forever 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:10.038 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:10.038 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:27:10.038 altname enp218s0f1np1 00:27:10.038 altname ens818f1np1 00:27:10.038 inet 192.168.100.9/24 scope global mlx_0_1 00:27:10.038 valid_lft forever preferred_lft forever 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:10.038 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:10.039 192.168.100.9' 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:10.039 192.168.100.9' 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:10.039 192.168.100.9' 
00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:10.039 10:49:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2342918 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2342918 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2342918 ']' 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.039 [2024-07-24 10:49:17.067420] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:27:10.039 [2024-07-24 10:49:17.067473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.039 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.039 [2024-07-24 10:49:17.129331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.039 [2024-07-24 10:49:17.175067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.039 [2024-07-24 10:49:17.175108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:10.039 [2024-07-24 10:49:17.175115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.039 [2024-07-24 10:49:17.175121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.039 [2024-07-24 10:49:17.175126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.039 [2024-07-24 10:49:17.175161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.039 [2024-07-24 10:49:17.175182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.039 [2024-07-24 10:49:17.178507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.039 [2024-07-24 10:49:17.178511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.039 [2024-07-24 10:49:17.318540] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15ec6a0/0x15f0b70) succeed. 00:27:10.039 [2024-07-24 10:49:17.327664] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15edc90/0x1632200) succeed. 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.039 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.301 Malloc0 00:27:10.301 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.302 [2024-07-24 10:49:17.526258] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.302 [ 00:27:10.302 { 00:27:10.302 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:10.302 "subtype": "Discovery", 00:27:10.302 "listen_addresses": [ 00:27:10.302 { 00:27:10.302 "trtype": "RDMA", 00:27:10.302 "adrfam": "IPv4", 00:27:10.302 "traddr": "192.168.100.8", 00:27:10.302 "trsvcid": "4420" 00:27:10.302 } 00:27:10.302 ], 00:27:10.302 "allow_any_host": true, 00:27:10.302 "hosts": [] 00:27:10.302 }, 00:27:10.302 { 00:27:10.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.302 "subtype": "NVMe", 00:27:10.302 "listen_addresses": [ 00:27:10.302 { 00:27:10.302 "trtype": "RDMA", 00:27:10.302 "adrfam": "IPv4", 00:27:10.302 "traddr": "192.168.100.8", 00:27:10.302 "trsvcid": "4420" 00:27:10.302 } 00:27:10.302 ], 00:27:10.302 "allow_any_host": true, 00:27:10.302 "hosts": [], 00:27:10.302 "serial_number": "SPDK00000000000001", 00:27:10.302 "model_number": "SPDK bdev Controller", 00:27:10.302 "max_namespaces": 32, 00:27:10.302 "min_cntlid": 1, 00:27:10.302 "max_cntlid": 65519, 00:27:10.302 "namespaces": [ 00:27:10.302 { 00:27:10.302 "nsid": 1, 00:27:10.302 "bdev_name": "Malloc0", 00:27:10.302 "name": "Malloc0", 00:27:10.302 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:10.302 "eui64": "ABCDEF0123456789", 00:27:10.302 "uuid": "026e2ffb-28ef-4fbf-9abd-536f1cb490a4" 00:27:10.302 } 00:27:10.302 ] 00:27:10.302 } 00:27:10.302 ] 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.302 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:10.302 [2024-07-24 10:49:17.576795] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
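The target-side configuration captured above reduces to a short RPC sequence against the running nvmf_tgt. A rough standalone equivalent, assuming the harness's rpc_cmd wrapper issues these methods through scripts/rpc.py against the /var/tmp/spdk.sock socket the test waits on earlier, looks like:

  # sketch only: same RPC methods and arguments as the rpc_cmd calls logged above
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done    # crude stand-in for waitforlisten
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_get_subsystems                                  # prints the JSON shown above

That JSON is the state spdk_nvme_identify is then pointed at via the discovery subnqn in the invocation just above, whose EAL startup and admin-queue bring-up follow.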
00:27:10.302 [2024-07-24 10:49:17.576840] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343147 ] 00:27:10.302 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.302 [2024-07-24 10:49:17.618692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:10.302 [2024-07-24 10:49:17.618763] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:10.302 [2024-07-24 10:49:17.618775] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:10.302 [2024-07-24 10:49:17.618779] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:10.302 [2024-07-24 10:49:17.618804] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:10.302 [2024-07-24 10:49:17.628971] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:27:10.302 [2024-07-24 10:49:17.639238] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:10.302 [2024-07-24 10:49:17.639248] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:10.302 [2024-07-24 10:49:17.639256] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639261] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639265] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639270] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639274] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639278] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639283] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639287] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639291] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639295] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639299] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639303] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639307] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639312] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639316] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639320] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639324] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639328] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639332] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639337] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639341] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639345] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639349] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639353] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639357] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639361] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639365] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639370] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639374] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639378] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639382] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639385] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:10.302 [2024-07-24 10:49:17.639392] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:10.302 [2024-07-24 10:49:17.639395] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:10.302 [2024-07-24 10:49:17.639408] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.639419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182400 00:27:10.302 [2024-07-24 10:49:17.644497] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.302 [2024-07-24 10:49:17.644506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:10.302 [2024-07-24 10:49:17.644512] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.644517] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:10.302 [2024-07-24 10:49:17.644523] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:10.302 [2024-07-24 10:49:17.644527] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:10.302 [2024-07-24 10:49:17.644539] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.302 [2024-07-24 10:49:17.644547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.302 [2024-07-24 10:49:17.644574] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.644579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.644584] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:10.303 [2024-07-24 10:49:17.644588] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644592] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:10.303 [2024-07-24 10:49:17.644598] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.303 [2024-07-24 10:49:17.644622] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.644626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.644631] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:10.303 [2024-07-24 10:49:17.644634] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:10.303 [2024-07-24 10:49:17.644645] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.303 [2024-07-24 10:49:17.644672] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.644676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.644681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:10.303 [2024-07-24 10:49:17.644686] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644693] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.303 [2024-07-24 10:49:17.644721] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.644725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.644729] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:10.303 [2024-07-24 10:49:17.644733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:10.303 [2024-07-24 10:49:17.644737] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:10.303 [2024-07-24 10:49:17.644846] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:10.303 [2024-07-24 10:49:17.644850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:10.303 [2024-07-24 10:49:17.644860] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.303 [2024-07-24 10:49:17.644883] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.644887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.644891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:10.303 [2024-07-24 10:49:17.644895] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644901] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.303 [2024-07-24 10:49:17.644922] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.644926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.644930] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:27:10.303 [2024-07-24 10:49:17.644934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:10.303 [2024-07-24 10:49:17.644938] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644942] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:10.303 [2024-07-24 10:49:17.644949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:10.303 [2024-07-24 10:49:17.644957] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.644963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182400 00:27:10.303 [2024-07-24 10:49:17.645000] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.645004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.645011] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:10.303 [2024-07-24 10:49:17.645015] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:10.303 [2024-07-24 10:49:17.645018] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:10.303 [2024-07-24 10:49:17.645024] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:10.303 [2024-07-24 10:49:17.645028] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:10.303 [2024-07-24 10:49:17.645032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:10.303 [2024-07-24 10:49:17.645036] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:10.303 [2024-07-24 10:49:17.645047] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.303 [2024-07-24 10:49:17.645071] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.645075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.645081] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645086] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.303 [2024-07-24 10:49:17.645091] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.303 [2024-07-24 10:49:17.645101] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.303 [2024-07-24 10:49:17.645111] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.303 [2024-07-24 10:49:17.645120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:10.303 [2024-07-24 10:49:17.645124] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:10.303 [2024-07-24 10:49:17.645139] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.303 [2024-07-24 10:49:17.645162] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.303 [2024-07-24 10:49:17.645166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:10.303 [2024-07-24 10:49:17.645171] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:10.303 [2024-07-24 10:49:17.645175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:10.303 [2024-07-24 10:49:17.645179] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.303 [2024-07-24 10:49:17.645187] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182400 00:27:10.304 [2024-07-24 10:49:17.645220] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.304 [2024-07-24 10:49:17.645224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:10.304 [2024-07-24 10:49:17.645229] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645236] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:10.304 [2024-07-24 10:49:17.645255] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182400 00:27:10.304 [2024-07-24 10:49:17.645266] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.304 [2024-07-24 10:49:17.645292] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.304 [2024-07-24 10:49:17.645296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:10.304 [2024-07-24 10:49:17.645304] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182400 00:27:10.304 [2024-07-24 10:49:17.645314] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645319] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.304 [2024-07-24 10:49:17.645323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:10.304 [2024-07-24 10:49:17.645327] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645341] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.304 [2024-07-24 10:49:17.645345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:10.304 [2024-07-24 10:49:17.645352] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182400 00:27:10.304 [2024-07-24 10:49:17.645364] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.304 [2024-07-24 10:49:17.645382] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.304 [2024-07-24 10:49:17.645386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:10.304 [2024-07-24 10:49:17.645394] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.304 ===================================================== 00:27:10.304 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:10.304 
===================================================== 00:27:10.304 Controller Capabilities/Features 00:27:10.304 ================================ 00:27:10.304 Vendor ID: 0000 00:27:10.304 Subsystem Vendor ID: 0000 00:27:10.304 Serial Number: .................... 00:27:10.304 Model Number: ........................................ 00:27:10.304 Firmware Version: 24.09 00:27:10.304 Recommended Arb Burst: 0 00:27:10.304 IEEE OUI Identifier: 00 00 00 00:27:10.304 Multi-path I/O 00:27:10.304 May have multiple subsystem ports: No 00:27:10.304 May have multiple controllers: No 00:27:10.304 Associated with SR-IOV VF: No 00:27:10.304 Max Data Transfer Size: 131072 00:27:10.304 Max Number of Namespaces: 0 00:27:10.304 Max Number of I/O Queues: 1024 00:27:10.304 NVMe Specification Version (VS): 1.3 00:27:10.304 NVMe Specification Version (Identify): 1.3 00:27:10.304 Maximum Queue Entries: 128 00:27:10.304 Contiguous Queues Required: Yes 00:27:10.304 Arbitration Mechanisms Supported 00:27:10.304 Weighted Round Robin: Not Supported 00:27:10.304 Vendor Specific: Not Supported 00:27:10.304 Reset Timeout: 15000 ms 00:27:10.304 Doorbell Stride: 4 bytes 00:27:10.304 NVM Subsystem Reset: Not Supported 00:27:10.304 Command Sets Supported 00:27:10.304 NVM Command Set: Supported 00:27:10.304 Boot Partition: Not Supported 00:27:10.304 Memory Page Size Minimum: 4096 bytes 00:27:10.304 Memory Page Size Maximum: 4096 bytes 00:27:10.304 Persistent Memory Region: Not Supported 00:27:10.304 Optional Asynchronous Events Supported 00:27:10.304 Namespace Attribute Notices: Not Supported 00:27:10.304 Firmware Activation Notices: Not Supported 00:27:10.304 ANA Change Notices: Not Supported 00:27:10.304 PLE Aggregate Log Change Notices: Not Supported 00:27:10.304 LBA Status Info Alert Notices: Not Supported 00:27:10.304 EGE Aggregate Log Change Notices: Not Supported 00:27:10.304 Normal NVM Subsystem Shutdown event: Not Supported 00:27:10.304 Zone Descriptor Change Notices: Not Supported 00:27:10.304 Discovery Log Change Notices: Supported 00:27:10.304 Controller Attributes 00:27:10.304 128-bit Host Identifier: Not Supported 00:27:10.304 Non-Operational Permissive Mode: Not Supported 00:27:10.304 NVM Sets: Not Supported 00:27:10.304 Read Recovery Levels: Not Supported 00:27:10.304 Endurance Groups: Not Supported 00:27:10.304 Predictable Latency Mode: Not Supported 00:27:10.304 Traffic Based Keep ALive: Not Supported 00:27:10.304 Namespace Granularity: Not Supported 00:27:10.304 SQ Associations: Not Supported 00:27:10.304 UUID List: Not Supported 00:27:10.304 Multi-Domain Subsystem: Not Supported 00:27:10.304 Fixed Capacity Management: Not Supported 00:27:10.304 Variable Capacity Management: Not Supported 00:27:10.304 Delete Endurance Group: Not Supported 00:27:10.304 Delete NVM Set: Not Supported 00:27:10.304 Extended LBA Formats Supported: Not Supported 00:27:10.304 Flexible Data Placement Supported: Not Supported 00:27:10.304 00:27:10.304 Controller Memory Buffer Support 00:27:10.304 ================================ 00:27:10.304 Supported: No 00:27:10.304 00:27:10.304 Persistent Memory Region Support 00:27:10.304 ================================ 00:27:10.304 Supported: No 00:27:10.304 00:27:10.304 Admin Command Set Attributes 00:27:10.304 ============================ 00:27:10.304 Security Send/Receive: Not Supported 00:27:10.304 Format NVM: Not Supported 00:27:10.304 Firmware Activate/Download: Not Supported 00:27:10.304 Namespace Management: Not Supported 00:27:10.304 Device Self-Test: Not Supported 00:27:10.304 
Directives: Not Supported 00:27:10.304 NVMe-MI: Not Supported 00:27:10.304 Virtualization Management: Not Supported 00:27:10.304 Doorbell Buffer Config: Not Supported 00:27:10.304 Get LBA Status Capability: Not Supported 00:27:10.304 Command & Feature Lockdown Capability: Not Supported 00:27:10.304 Abort Command Limit: 1 00:27:10.304 Async Event Request Limit: 4 00:27:10.304 Number of Firmware Slots: N/A 00:27:10.304 Firmware Slot 1 Read-Only: N/A 00:27:10.304 Firmware Activation Without Reset: N/A 00:27:10.304 Multiple Update Detection Support: N/A 00:27:10.304 Firmware Update Granularity: No Information Provided 00:27:10.304 Per-Namespace SMART Log: No 00:27:10.304 Asymmetric Namespace Access Log Page: Not Supported 00:27:10.304 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:10.304 Command Effects Log Page: Not Supported 00:27:10.304 Get Log Page Extended Data: Supported 00:27:10.304 Telemetry Log Pages: Not Supported 00:27:10.304 Persistent Event Log Pages: Not Supported 00:27:10.304 Supported Log Pages Log Page: May Support 00:27:10.304 Commands Supported & Effects Log Page: Not Supported 00:27:10.304 Feature Identifiers & Effects Log Page:May Support 00:27:10.304 NVMe-MI Commands & Effects Log Page: May Support 00:27:10.304 Data Area 4 for Telemetry Log: Not Supported 00:27:10.304 Error Log Page Entries Supported: 128 00:27:10.304 Keep Alive: Not Supported 00:27:10.304 00:27:10.304 NVM Command Set Attributes 00:27:10.304 ========================== 00:27:10.304 Submission Queue Entry Size 00:27:10.304 Max: 1 00:27:10.304 Min: 1 00:27:10.304 Completion Queue Entry Size 00:27:10.304 Max: 1 00:27:10.304 Min: 1 00:27:10.304 Number of Namespaces: 0 00:27:10.304 Compare Command: Not Supported 00:27:10.304 Write Uncorrectable Command: Not Supported 00:27:10.304 Dataset Management Command: Not Supported 00:27:10.304 Write Zeroes Command: Not Supported 00:27:10.304 Set Features Save Field: Not Supported 00:27:10.304 Reservations: Not Supported 00:27:10.304 Timestamp: Not Supported 00:27:10.304 Copy: Not Supported 00:27:10.304 Volatile Write Cache: Not Present 00:27:10.304 Atomic Write Unit (Normal): 1 00:27:10.304 Atomic Write Unit (PFail): 1 00:27:10.304 Atomic Compare & Write Unit: 1 00:27:10.304 Fused Compare & Write: Supported 00:27:10.304 Scatter-Gather List 00:27:10.304 SGL Command Set: Supported 00:27:10.304 SGL Keyed: Supported 00:27:10.304 SGL Bit Bucket Descriptor: Not Supported 00:27:10.305 SGL Metadata Pointer: Not Supported 00:27:10.305 Oversized SGL: Not Supported 00:27:10.305 SGL Metadata Address: Not Supported 00:27:10.305 SGL Offset: Supported 00:27:10.305 Transport SGL Data Block: Not Supported 00:27:10.305 Replay Protected Memory Block: Not Supported 00:27:10.305 00:27:10.305 Firmware Slot Information 00:27:10.305 ========================= 00:27:10.305 Active slot: 0 00:27:10.305 00:27:10.305 00:27:10.305 Error Log 00:27:10.305 ========= 00:27:10.305 00:27:10.305 Active Namespaces 00:27:10.305 ================= 00:27:10.305 Discovery Log Page 00:27:10.305 ================== 00:27:10.305 Generation Counter: 2 00:27:10.305 Number of Records: 2 00:27:10.305 Record Format: 0 00:27:10.305 00:27:10.305 Discovery Log Entry 0 00:27:10.305 ---------------------- 00:27:10.305 Transport Type: 1 (RDMA) 00:27:10.305 Address Family: 1 (IPv4) 00:27:10.305 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:10.305 Entry Flags: 00:27:10.305 Duplicate Returned Information: 1 00:27:10.305 Explicit Persistent Connection Support for Discovery: 1 00:27:10.305 Transport Requirements: 
00:27:10.305 Secure Channel: Not Required 00:27:10.305 Port ID: 0 (0x0000) 00:27:10.305 Controller ID: 65535 (0xffff) 00:27:10.305 Admin Max SQ Size: 128 00:27:10.305 Transport Service Identifier: 4420 00:27:10.305 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:10.305 Transport Address: 192.168.100.8 00:27:10.305 Transport Specific Address Subtype - RDMA 00:27:10.305 RDMA QP Service Type: 1 (Reliable Connected) 00:27:10.305 RDMA Provider Type: 1 (No provider specified) 00:27:10.305 RDMA CM Service: 1 (RDMA_CM) 00:27:10.305 Discovery Log Entry 1 00:27:10.305 ---------------------- 00:27:10.305 Transport Type: 1 (RDMA) 00:27:10.305 Address Family: 1 (IPv4) 00:27:10.305 Subsystem Type: 2 (NVM Subsystem) 00:27:10.305 Entry Flags: 00:27:10.305 Duplicate Returned Information: 0 00:27:10.305 Explicit Persistent Connection Support for Discovery: 0 00:27:10.305 Transport Requirements: 00:27:10.305 Secure Channel: Not Required 00:27:10.305 Port ID: 0 (0x0000) 00:27:10.305 Controller ID: 65535 (0xffff) 00:27:10.305 Admin Max SQ Size: [2024-07-24 10:49:17.645457] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:10.305 [2024-07-24 10:49:17.645465] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65151 doesn't match qid 00:27:10.305 [2024-07-24 10:49:17.645476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:5 sqhd:bf40 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645481] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65151 doesn't match qid 00:27:10.305 [2024-07-24 10:49:17.645487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:5 sqhd:bf40 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645497] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65151 doesn't match qid 00:27:10.305 [2024-07-24 10:49:17.645502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:5 sqhd:bf40 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645507] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65151 doesn't match qid 00:27:10.305 [2024-07-24 10:49:17.645512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:5 sqhd:bf40 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645519] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645540] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645553] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645563] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 
10:49:17.645582] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645591] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:10.305 [2024-07-24 10:49:17.645595] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:10.305 [2024-07-24 10:49:17.645599] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645605] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645635] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645647] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645654] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645676] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645685] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645692] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645718] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645726] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645733] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645758] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645767] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645774] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645800] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645809] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645815] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645838] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645847] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645853] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645877] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:10.305 [2024-07-24 10:49:17.645886] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645893] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.305 [2024-07-24 10:49:17.645899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.305 [2024-07-24 10:49:17.645921] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.305 [2024-07-24 10:49:17.645925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.645930] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.645936] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.645942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.645962] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.645966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.645970] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.645977] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.645982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646004] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646013] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646019] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646042] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646050] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646057] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646083] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646091] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646098] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646124] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646132] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646139] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646168] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646176] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646183] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646209] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646217] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646224] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646247] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646255] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646262] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646291] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646299] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646306] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646336] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646344] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646351] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646380] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.306 [2024-07-24 10:49:17.646384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:10.306 [2024-07-24 10:49:17.646388] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646395] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.306 [2024-07-24 10:49:17.646400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.306 [2024-07-24 10:49:17.646420] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646429] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646435] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646456] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646465] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646471] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646503] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646512] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646519] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646546] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646555] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646561] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646589] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646597] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646604] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646635] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646643] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646650] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646671] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646679] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646686] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646707] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646716] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646723] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646748] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646757] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646764] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646785] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646793] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646800] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646821] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646829] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646836] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646862] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646870] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646876] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646899] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646908] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646914] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646939] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646947] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646954] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.646980] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.646984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.646988] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.646994] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.647000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.647019] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.647023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.647027] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.647034] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.647039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.307 [2024-07-24 10:49:17.647058] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.307 [2024-07-24 10:49:17.647063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:10.307 [2024-07-24 10:49:17.647067] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.307 [2024-07-24 10:49:17.647075] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647103] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647112] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647119] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647140] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647148] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647155] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647179] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647188] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647194] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647217] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647226] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647232] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647256] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647265] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647271] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647296] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647304] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647312] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647339] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647348] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647354] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647379] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647387] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647394] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647421] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647430] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647436] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647464] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647472] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647479] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647508] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647517] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647523] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647549] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647559] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647566] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647591] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647600] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647606] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647632] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647641] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647647] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647671] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647680] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647687] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647712] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647721] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647727] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647750] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:10.308 [2024-07-24 10:49:17.647758] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647765] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.308 [2024-07-24 10:49:17.647771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.308 [2024-07-24 10:49:17.647788] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.308 [2024-07-24 10:49:17.647792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.647797] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647804] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.647825] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.647829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.647834] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647840] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.647863] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.647867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.647871] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647878] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.647904] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.647908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.647912] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647919] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.647945] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.647949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.647953] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647960] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.647965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.647986] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.647990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.647994] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648001] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648028] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648038] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648044] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648069] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648077] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648084] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648109] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648118] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648124] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648149] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648157] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648164] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648191] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648199] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648206] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648227] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648236] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648242] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648268] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648276] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648283] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648309] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648317] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648324] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648349] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648358] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648364] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648391] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648400] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648406] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648432] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648440] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648447] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.309 [2024-07-24 10:49:17.648471] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.309 [2024-07-24 10:49:17.648475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:10.309 [2024-07-24 10:49:17.648480] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182400 00:27:10.309 [2024-07-24 10:49:17.648486] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.310 [2024-07-24 10:49:17.652498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.310 [2024-07-24 10:49:17.652518] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.310 [2024-07-24 10:49:17.652522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0019 p:0 m:0 dnr:0 00:27:10.310 [2024-07-24 10:49:17.652527] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182400 00:27:10.310 [2024-07-24 10:49:17.652532] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:27:10.310 128 00:27:10.310 Transport Service Identifier: 4420 00:27:10.310 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:10.310 Transport Address: 192.168.100.8 00:27:10.310 Transport Specific Address Subtype - RDMA 00:27:10.310 RDMA QP Service Type: 1 (Reliable Connected) 00:27:10.310 RDMA Provider Type: 1 (No provider specified) 00:27:10.310 RDMA CM Service: 1 (RDMA_CM) 00:27:10.310 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:10.310 [2024-07-24 10:49:17.720688] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
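The discovery log entry above points at an NVMe-oF subsystem (nqn.2016-06.io.spdk:cnode1) served over RDMA at 192.168.100.8:4420, and the test then drives spdk_nvme_identify at it with the quoted transport ID string. For orientation, a minimal sketch of what that string corresponds to in SPDK's public C API follows; it assumes the usual spdk_nvme_transport_id_parse()/spdk_nvme_connect() signatures and a hypothetical app name, and is not the identify tool's actual source.

/*
 * Minimal sketch (not the spdk_nvme_identify source): connect to the RDMA
 * subsystem from the discovery entry above and print a couple of
 * identify-controller fields. Assumes SPDK's public C API with the usual
 * spdk_nvme_transport_id_parse()/spdk_nvme_connect() signatures; build and
 * link against SPDK in the normal way.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";              /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string the harness passes to spdk_nvme_identify -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* Connecting performs the FABRIC CONNECT and property get/set sequence
     * that the DEBUG trace above records on the admin queue pair. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "failed to connect to %s\n", trid.traddr);
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);        /* cached IDENTIFY CONTROLLER data */
    printf("subnqn: %s\n", cdata->subnqn);
    printf("model:  %.*s\n", (int)sizeof(cdata->mn), cdata->mn);

    spdk_nvme_detach(ctrlr);
    return 0;
}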
00:27:10.310 [2024-07-24 10:49:17.720736] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343149 ] 00:27:10.310 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.573 [2024-07-24 10:49:17.761666] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:10.573 [2024-07-24 10:49:17.761728] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:10.573 [2024-07-24 10:49:17.761740] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:10.573 [2024-07-24 10:49:17.761743] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:10.573 [2024-07-24 10:49:17.761763] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:10.573 [2024-07-24 10:49:17.770969] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:27:10.573 [2024-07-24 10:49:17.781227] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:10.573 [2024-07-24 10:49:17.781237] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:10.573 [2024-07-24 10:49:17.781242] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781247] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781251] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781255] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781259] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781264] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781268] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781272] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781276] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781282] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781286] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781290] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781294] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781299] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781303] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781307] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781311] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781315] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781319] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781323] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781327] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781331] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781335] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781339] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781344] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781348] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781352] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182400 00:27:10.573 [2024-07-24 10:49:17.781356] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.781360] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.781364] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.781368] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.781372] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:10.574 [2024-07-24 10:49:17.781376] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:10.574 [2024-07-24 10:49:17.781378] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:10.574 [2024-07-24 10:49:17.781389] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.781398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182400 00:27:10.574 [2024-07-24 10:49:17.786496] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786509] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786514] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:10.574 [2024-07-24 10:49:17.786519] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:10.574 [2024-07-24 10:49:17.786525] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:10.574 [2024-07-24 10:49:17.786536] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.574 [2024-07-24 10:49:17.786562] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786570] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:10.574 [2024-07-24 10:49:17.786574] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786579] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:10.574 [2024-07-24 10:49:17.786585] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.574 [2024-07-24 10:49:17.786609] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786617] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:10.574 [2024-07-24 10:49:17.786622] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:10.574 [2024-07-24 10:49:17.786632] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.574 [2024-07-24 10:49:17.786658] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:10.574 [2024-07-24 10:49:17.786670] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 
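The pair of completions just above is the init state machine reading the controller's VS and CAP properties over the fabric: cdw0:10300 is the Version register and cdw0:1e01007f is the low dword of CAP. Decoding them with the standard NVMe register layouts (an illustrative snippet, not SPDK code) accounts for the values the trace then acts on:

/* Decode the property values reported in the completions above using the
 * standard NVMe register layouts (illustrative only, not SPDK source). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vs  = 0x10300;     /* cdw0 of the VS property get above  */
    uint32_t cap = 0x1e01007f;  /* cdw0 of the CAP property get above */

    /* VS: bits 31:16 major, 15:8 minor, 7:0 tertiary -> NVMe 1.3.0 */
    printf("NVMe version %u.%u.%u\n",
           (vs >> 16) & 0xffff, (vs >> 8) & 0xff, vs & 0xff);

    /* CAP (low dword): 15:0 MQES (0-based), 16 CQR, 31:24 TO (500 ms units) */
    printf("max queue entries: %u\n", (cap & 0xffff) + 1);         /* 128   */
    printf("contiguous queues required: %u\n", (cap >> 16) & 1);   /* 1     */
    printf("enable timeout: %u ms\n", ((cap >> 24) & 0xff) * 500); /* 15000 */

    return 0;
}

MQES 0x7f gives 128 queue entries, and TO 0x1e gives 30 x 500 ms = 15000 ms, which is the timeout attached to the "check en wait for cc" and "disable and wait for CSTS.RDY = 0" states above.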
00:27:10.574 [2024-07-24 10:49:17.786677] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.574 [2024-07-24 10:49:17.786698] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786706] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:10.574 [2024-07-24 10:49:17.786710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:10.574 [2024-07-24 10:49:17.786714] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:10.574 [2024-07-24 10:49:17.786824] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:10.574 [2024-07-24 10:49:17.786828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:10.574 [2024-07-24 10:49:17.786836] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.574 [2024-07-24 10:49:17.786861] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:10.574 [2024-07-24 10:49:17.786873] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786879] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.574 [2024-07-24 10:49:17.786902] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786910] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:10.574 [2024-07-24 10:49:17.786914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:27:10.574 [2024-07-24 10:49:17.786918] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786922] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:10.574 [2024-07-24 10:49:17.786928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:10.574 [2024-07-24 10:49:17.786935] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.786941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182400 00:27:10.574 [2024-07-24 10:49:17.786982] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.786986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.786992] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:10.574 [2024-07-24 10:49:17.786996] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:10.574 [2024-07-24 10:49:17.786999] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:10.574 [2024-07-24 10:49:17.787004] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:10.574 [2024-07-24 10:49:17.787008] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:10.574 [2024-07-24 10:49:17.787013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:10.574 [2024-07-24 10:49:17.787017] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.787022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:10.574 [2024-07-24 10:49:17.787028] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.787034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.574 [2024-07-24 10:49:17.787054] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.574 [2024-07-24 10:49:17.787058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:10.574 [2024-07-24 10:49:17.787064] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182400 00:27:10.574 [2024-07-24 10:49:17.787069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.574 [2024-07-24 10:49:17.787074] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182400 00:27:10.574 
[2024-07-24 10:49:17.787079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.574 [2024-07-24 10:49:17.787084] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.575 [2024-07-24 10:49:17.787094] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.575 [2024-07-24 10:49:17.787103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787106] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787118] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.575 [2024-07-24 10:49:17.787145] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787154] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:10.575 [2024-07-24 10:49:17.787158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787162] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787180] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.575 [2024-07-24 10:49:17.787207] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787260] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787264] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787276] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182400 00:27:10.575 [2024-07-24 10:49:17.787305] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787316] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:10.575 [2024-07-24 10:49:17.787326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787330] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787342] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182400 00:27:10.575 [2024-07-24 10:49:17.787377] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787395] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787407] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182400 00:27:10.575 [2024-07-24 10:49:17.787439] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787454] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787484] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:10.575 [2024-07-24 10:49:17.787487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:10.575 [2024-07-24 10:49:17.787497] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:10.575 [2024-07-24 10:49:17.787507] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.575 [2024-07-24 10:49:17.787519] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.575 [2024-07-24 10:49:17.787532] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787540] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787547] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.575 [2024-07-24 10:49:17.787558] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787566] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787575] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787583] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787589] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.575 [2024-07-24 10:49:17.787614] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787623] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787630] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.575 [2024-07-24 10:49:17.787658] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.575 [2024-07-24 10:49:17.787662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:27:10.575 [2024-07-24 10:49:17.787666] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787677] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182400 00:27:10.575 [2024-07-24 10:49:17.787688] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182400 00:27:10.575 [2024-07-24 10:49:17.787694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182400 00:27:10.576 [2024-07-24 10:49:17.787700] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182400 00:27:10.576 [2024-07-24 10:49:17.787705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182400 00:27:10.576 [2024-07-24 10:49:17.787711] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182400 00:27:10.576 [2024-07-24 10:49:17.787717] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182400 00:27:10.576 [2024-07-24 10:49:17.787723] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.576 [2024-07-24 10:49:17.787727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:10.576 [2024-07-24 10:49:17.787735] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.576 [2024-07-24 10:49:17.787752] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.576 [2024-07-24 10:49:17.787756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:10.576 [2024-07-24 10:49:17.787763] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.576 [2024-07-24 10:49:17.787767] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.576 [2024-07-24 10:49:17.787771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:10.576 [2024-07-24 10:49:17.787775] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.576 [2024-07-24 10:49:17.787787] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.576 [2024-07-24 10:49:17.787792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:10.576 [2024-07-24 10:49:17.787798] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.576 ===================================================== 00:27:10.576 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:10.576 ===================================================== 00:27:10.576 Controller Capabilities/Features 00:27:10.576 ================================ 00:27:10.576 Vendor ID: 8086 00:27:10.576 Subsystem Vendor ID: 8086 00:27:10.576 Serial Number: SPDK00000000000001 00:27:10.576 Model Number: SPDK bdev Controller 00:27:10.576 Firmware Version: 24.09 00:27:10.576 Recommended Arb Burst: 6 00:27:10.576 IEEE OUI Identifier: e4 d2 5c 00:27:10.576 Multi-path I/O 00:27:10.576 May have multiple subsystem ports: Yes 00:27:10.576 May have multiple controllers: Yes 00:27:10.576 Associated with SR-IOV VF: No 00:27:10.576 Max Data Transfer Size: 131072 00:27:10.576 Max Number of Namespaces: 32 00:27:10.576 Max Number of I/O Queues: 127 00:27:10.576 NVMe Specification Version (VS): 1.3 00:27:10.576 NVMe Specification Version (Identify): 1.3 00:27:10.576 Maximum Queue Entries: 128 00:27:10.576 Contiguous Queues Required: Yes 00:27:10.576 Arbitration Mechanisms Supported 00:27:10.576 Weighted Round Robin: Not Supported 00:27:10.576 Vendor Specific: Not Supported 00:27:10.576 Reset Timeout: 15000 ms 00:27:10.576 Doorbell Stride: 4 bytes 00:27:10.576 NVM Subsystem Reset: Not Supported 00:27:10.576 Command Sets Supported 00:27:10.576 NVM Command Set: Supported 00:27:10.576 Boot Partition: Not Supported 00:27:10.576 Memory Page Size Minimum: 4096 bytes 00:27:10.576 Memory Page Size Maximum: 4096 bytes 00:27:10.576 Persistent Memory Region: Not Supported 00:27:10.576 Optional Asynchronous Events 
Supported 00:27:10.576 Namespace Attribute Notices: Supported 00:27:10.576 Firmware Activation Notices: Not Supported 00:27:10.576 ANA Change Notices: Not Supported 00:27:10.576 PLE Aggregate Log Change Notices: Not Supported 00:27:10.576 LBA Status Info Alert Notices: Not Supported 00:27:10.576 EGE Aggregate Log Change Notices: Not Supported 00:27:10.576 Normal NVM Subsystem Shutdown event: Not Supported 00:27:10.576 Zone Descriptor Change Notices: Not Supported 00:27:10.576 Discovery Log Change Notices: Not Supported 00:27:10.576 Controller Attributes 00:27:10.576 128-bit Host Identifier: Supported 00:27:10.576 Non-Operational Permissive Mode: Not Supported 00:27:10.576 NVM Sets: Not Supported 00:27:10.576 Read Recovery Levels: Not Supported 00:27:10.576 Endurance Groups: Not Supported 00:27:10.576 Predictable Latency Mode: Not Supported 00:27:10.576 Traffic Based Keep ALive: Not Supported 00:27:10.576 Namespace Granularity: Not Supported 00:27:10.576 SQ Associations: Not Supported 00:27:10.576 UUID List: Not Supported 00:27:10.576 Multi-Domain Subsystem: Not Supported 00:27:10.576 Fixed Capacity Management: Not Supported 00:27:10.576 Variable Capacity Management: Not Supported 00:27:10.576 Delete Endurance Group: Not Supported 00:27:10.576 Delete NVM Set: Not Supported 00:27:10.576 Extended LBA Formats Supported: Not Supported 00:27:10.576 Flexible Data Placement Supported: Not Supported 00:27:10.576 00:27:10.576 Controller Memory Buffer Support 00:27:10.576 ================================ 00:27:10.576 Supported: No 00:27:10.576 00:27:10.576 Persistent Memory Region Support 00:27:10.576 ================================ 00:27:10.576 Supported: No 00:27:10.576 00:27:10.576 Admin Command Set Attributes 00:27:10.576 ============================ 00:27:10.576 Security Send/Receive: Not Supported 00:27:10.576 Format NVM: Not Supported 00:27:10.576 Firmware Activate/Download: Not Supported 00:27:10.576 Namespace Management: Not Supported 00:27:10.576 Device Self-Test: Not Supported 00:27:10.576 Directives: Not Supported 00:27:10.576 NVMe-MI: Not Supported 00:27:10.576 Virtualization Management: Not Supported 00:27:10.576 Doorbell Buffer Config: Not Supported 00:27:10.576 Get LBA Status Capability: Not Supported 00:27:10.576 Command & Feature Lockdown Capability: Not Supported 00:27:10.576 Abort Command Limit: 4 00:27:10.576 Async Event Request Limit: 4 00:27:10.576 Number of Firmware Slots: N/A 00:27:10.576 Firmware Slot 1 Read-Only: N/A 00:27:10.576 Firmware Activation Without Reset: N/A 00:27:10.576 Multiple Update Detection Support: N/A 00:27:10.576 Firmware Update Granularity: No Information Provided 00:27:10.576 Per-Namespace SMART Log: No 00:27:10.576 Asymmetric Namespace Access Log Page: Not Supported 00:27:10.576 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:10.576 Command Effects Log Page: Supported 00:27:10.576 Get Log Page Extended Data: Supported 00:27:10.576 Telemetry Log Pages: Not Supported 00:27:10.576 Persistent Event Log Pages: Not Supported 00:27:10.576 Supported Log Pages Log Page: May Support 00:27:10.576 Commands Supported & Effects Log Page: Not Supported 00:27:10.576 Feature Identifiers & Effects Log Page:May Support 00:27:10.576 NVMe-MI Commands & Effects Log Page: May Support 00:27:10.576 Data Area 4 for Telemetry Log: Not Supported 00:27:10.576 Error Log Page Entries Supported: 128 00:27:10.576 Keep Alive: Supported 00:27:10.576 Keep Alive Granularity: 10000 ms 00:27:10.576 00:27:10.576 NVM Command Set Attributes 00:27:10.576 ========================== 00:27:10.576 
Submission Queue Entry Size 00:27:10.576 Max: 64 00:27:10.576 Min: 64 00:27:10.576 Completion Queue Entry Size 00:27:10.576 Max: 16 00:27:10.576 Min: 16 00:27:10.576 Number of Namespaces: 32 00:27:10.576 Compare Command: Supported 00:27:10.576 Write Uncorrectable Command: Not Supported 00:27:10.576 Dataset Management Command: Supported 00:27:10.576 Write Zeroes Command: Supported 00:27:10.576 Set Features Save Field: Not Supported 00:27:10.576 Reservations: Supported 00:27:10.576 Timestamp: Not Supported 00:27:10.576 Copy: Supported 00:27:10.576 Volatile Write Cache: Present 00:27:10.576 Atomic Write Unit (Normal): 1 00:27:10.576 Atomic Write Unit (PFail): 1 00:27:10.576 Atomic Compare & Write Unit: 1 00:27:10.576 Fused Compare & Write: Supported 00:27:10.576 Scatter-Gather List 00:27:10.576 SGL Command Set: Supported 00:27:10.576 SGL Keyed: Supported 00:27:10.576 SGL Bit Bucket Descriptor: Not Supported 00:27:10.576 SGL Metadata Pointer: Not Supported 00:27:10.576 Oversized SGL: Not Supported 00:27:10.576 SGL Metadata Address: Not Supported 00:27:10.576 SGL Offset: Supported 00:27:10.576 Transport SGL Data Block: Not Supported 00:27:10.576 Replay Protected Memory Block: Not Supported 00:27:10.576 00:27:10.576 Firmware Slot Information 00:27:10.576 ========================= 00:27:10.576 Active slot: 1 00:27:10.576 Slot 1 Firmware Revision: 24.09 00:27:10.576 00:27:10.576 00:27:10.576 Commands Supported and Effects 00:27:10.576 ============================== 00:27:10.576 Admin Commands 00:27:10.576 -------------- 00:27:10.576 Get Log Page (02h): Supported 00:27:10.576 Identify (06h): Supported 00:27:10.576 Abort (08h): Supported 00:27:10.576 Set Features (09h): Supported 00:27:10.576 Get Features (0Ah): Supported 00:27:10.576 Asynchronous Event Request (0Ch): Supported 00:27:10.576 Keep Alive (18h): Supported 00:27:10.576 I/O Commands 00:27:10.576 ------------ 00:27:10.577 Flush (00h): Supported LBA-Change 00:27:10.577 Write (01h): Supported LBA-Change 00:27:10.577 Read (02h): Supported 00:27:10.577 Compare (05h): Supported 00:27:10.577 Write Zeroes (08h): Supported LBA-Change 00:27:10.577 Dataset Management (09h): Supported LBA-Change 00:27:10.577 Copy (19h): Supported LBA-Change 00:27:10.577 00:27:10.577 Error Log 00:27:10.577 ========= 00:27:10.577 00:27:10.577 Arbitration 00:27:10.577 =========== 00:27:10.577 Arbitration Burst: 1 00:27:10.577 00:27:10.577 Power Management 00:27:10.577 ================ 00:27:10.577 Number of Power States: 1 00:27:10.577 Current Power State: Power State #0 00:27:10.577 Power State #0: 00:27:10.577 Max Power: 0.00 W 00:27:10.577 Non-Operational State: Operational 00:27:10.577 Entry Latency: Not Reported 00:27:10.577 Exit Latency: Not Reported 00:27:10.577 Relative Read Throughput: 0 00:27:10.577 Relative Read Latency: 0 00:27:10.577 Relative Write Throughput: 0 00:27:10.577 Relative Write Latency: 0 00:27:10.577 Idle Power: Not Reported 00:27:10.577 Active Power: Not Reported 00:27:10.577 Non-Operational Permissive Mode: Not Supported 00:27:10.577 00:27:10.577 Health Information 00:27:10.577 ================== 00:27:10.577 Critical Warnings: 00:27:10.577 Available Spare Space: OK 00:27:10.577 Temperature: OK 00:27:10.577 Device Reliability: OK 00:27:10.577 Read Only: No 00:27:10.577 Volatile Memory Backup: OK 00:27:10.577 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:10.577 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:10.577 Available Spare: 0% 00:27:10.577 Available Spare Threshold: 0% 00:27:10.577 Life Percentage [2024-07-24 10:49:17.787870] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.787878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.787894] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.787899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.787903] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.787925] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:10.577 [2024-07-24 10:49:17.787932] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38506 doesn't match qid 00:27:10.577 [2024-07-24 10:49:17.787944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.787949] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38506 doesn't match qid 00:27:10.577 [2024-07-24 10:49:17.787954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.787959] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38506 doesn't match qid 00:27:10.577 [2024-07-24 10:49:17.787965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.787969] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38506 doesn't match qid 00:27:10.577 [2024-07-24 10:49:17.787974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32519 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.787981] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.787987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788008] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788018] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788028] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788048] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788056] 
nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:10.577 [2024-07-24 10:49:17.788059] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:10.577 [2024-07-24 10:49:17.788063] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788070] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788093] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788102] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788110] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788137] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788145] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788152] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788176] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788184] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788191] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788220] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788228] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788235] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788262] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788271] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788278] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788303] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788312] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788319] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788345] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788358] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788365] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788387] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.577 [2024-07-24 10:49:17.788391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:10.577 [2024-07-24 10:49:17.788395] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788402] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.577 [2024-07-24 10:49:17.788408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.577 [2024-07-24 10:49:17.788429] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:10.578 
[2024-07-24 10:49:17.788438] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788444] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788474] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788483] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788489] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788524] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788533] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788540] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788565] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788574] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788580] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788603] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788612] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788619] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788644] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788653] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788659] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788683] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788691] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788698] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788726] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788735] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788742] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788772] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788780] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788787] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788808] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788817] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788823] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788847] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788857] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788864] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788889] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788897] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788904] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788931] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788940] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788946] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.788968] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.788973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 10:49:17.788977] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788983] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.788989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.578 [2024-07-24 10:49:17.789012] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.578 [2024-07-24 10:49:17.789016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:27:10.578 [2024-07-24 
10:49:17.789020] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.789027] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.578 [2024-07-24 10:49:17.789032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789049] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789057] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789064] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789090] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789098] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789105] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789126] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789134] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789141] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789171] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789179] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789186] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789210] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789218] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789224] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789245] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789253] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789260] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789288] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789297] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789303] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789324] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789332] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789339] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789365] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789374] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789381] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789403] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789411] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789418] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789438] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789446] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789453] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789477] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789485] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789498] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789521] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789529] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789536] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789565] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 
10:49:17.789574] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789580] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789607] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789615] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789622] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789647] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789655] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789662] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789689] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:10.579 [2024-07-24 10:49:17.789697] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789704] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.579 [2024-07-24 10:49:17.789709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.579 [2024-07-24 10:49:17.789732] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.579 [2024-07-24 10:49:17.789736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.789740] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789747] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.789775] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.789779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.789783] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789790] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.789818] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.789822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.789826] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789833] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.789863] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.789867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.789871] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789877] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.789903] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.789907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.789911] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789918] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.789939] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.789943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.789947] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789953] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.789976] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.789980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.789984] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789991] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.789996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790011] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790019] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790026] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790057] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790065] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790072] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790092] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790101] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790107] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790134] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 
10:49:17.790142] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790149] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790174] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790182] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790189] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790214] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790222] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790229] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790254] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790262] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790270] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790300] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790308] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790315] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790342] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790350] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790357] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790382] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790390] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790397] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790422] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.580 [2024-07-24 10:49:17.790426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:10.580 [2024-07-24 10:49:17.790431] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790437] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.580 [2024-07-24 10:49:17.790443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.580 [2024-07-24 10:49:17.790464] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.581 [2024-07-24 10:49:17.790468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:10.581 [2024-07-24 10:49:17.790472] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182400 00:27:10.581 [2024-07-24 10:49:17.790479] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182400 00:27:10.581 [2024-07-24 10:49:17.790484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.581 [2024-07-24 10:49:17.794494] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.581 [2024-07-24 10:49:17.794500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:10.581 [2024-07-24 10:49:17.794505] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182400 00:27:10.581 [2024-07-24 10:49:17.794513] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182400 00:27:10.581 [2024-07-24 10:49:17.794519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:10.581 [2024-07-24 10:49:17.794538] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:10.581 [2024-07-24 10:49:17.794542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0017 p:0 m:0 dnr:0 00:27:10.581 [2024-07-24 10:49:17.794547] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182400 00:27:10.581 [2024-07-24 10:49:17.794551] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:27:10.581 Used: 0% 00:27:10.581 Data Units Read: 0 00:27:10.581 Data Units Written: 0 00:27:10.581 Host Read Commands: 0 00:27:10.581 Host Write Commands: 0 00:27:10.581 Controller Busy Time: 0 minutes 00:27:10.581 Power Cycles: 0 00:27:10.581 Power On Hours: 0 hours 00:27:10.581 Unsafe Shutdowns: 0 00:27:10.581 Unrecoverable Media Errors: 0 00:27:10.581 Lifetime Error Log Entries: 0 00:27:10.581 Warning Temperature Time: 0 minutes 00:27:10.581 Critical Temperature Time: 0 minutes 00:27:10.581 00:27:10.581 Number of Queues 00:27:10.581 ================ 00:27:10.581 Number of I/O Submission Queues: 127 00:27:10.581 Number of I/O Completion Queues: 127 00:27:10.581 00:27:10.581 Active Namespaces 00:27:10.581 ================= 00:27:10.581 Namespace ID:1 00:27:10.581 Error Recovery Timeout: Unlimited 00:27:10.581 Command Set Identifier: NVM (00h) 00:27:10.581 Deallocate: Supported 00:27:10.581 Deallocated/Unwritten Error: Not Supported 00:27:10.581 Deallocated Read Value: Unknown 00:27:10.581 Deallocate in Write Zeroes: Not Supported 00:27:10.581 Deallocated Guard Field: 0xFFFF 00:27:10.581 Flush: Supported 00:27:10.581 Reservation: Supported 00:27:10.581 Namespace Sharing Capabilities: Multiple Controllers 00:27:10.581 Size (in LBAs): 131072 (0GiB) 00:27:10.581 Capacity (in LBAs): 131072 (0GiB) 00:27:10.581 Utilization (in LBAs): 131072 (0GiB) 00:27:10.581 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:10.581 EUI64: ABCDEF0123456789 00:27:10.581 UUID: 026e2ffb-28ef-4fbf-9abd-536f1cb490a4 00:27:10.581 Thin Provisioning: Not Supported 00:27:10.581 Per-NS Atomic Units: Yes 00:27:10.581 Atomic Boundary Size (Normal): 0 00:27:10.581 Atomic Boundary Size (PFail): 0 00:27:10.581 Atomic Boundary Offset: 0 00:27:10.581 Maximum Single Source Range Length: 65535 00:27:10.581 Maximum Copy Length: 65535 00:27:10.581 Maximum Source Range Count: 1 00:27:10.581 NGUID/EUI64 Never Reused: No 00:27:10.581 Namespace Write Protected: No 00:27:10.581 Number of LBA Formats: 1 00:27:10.581 Current LBA Format: LBA Format #00 00:27:10.581 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:10.581 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM 
EXIT 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:10.581 rmmod nvme_rdma 00:27:10.581 rmmod nvme_fabrics 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2342918 ']' 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2342918 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2342918 ']' 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2342918 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2342918 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2342918' 00:27:10.581 killing process with pid 2342918 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2342918 00:27:10.581 10:49:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2342918 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:10.840 00:27:10.840 real 0m6.761s 00:27:10.840 user 0m5.499s 00:27:10.840 sys 0m4.507s 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:10.840 ************************************ 00:27:10.840 END TEST nvmf_identify 00:27:10.840 ************************************ 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.840 ************************************ 00:27:10.840 START TEST nvmf_perf 00:27:10.840 ************************************ 00:27:10.840 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:11.098 * Looking for test storage... 00:27:11.098 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.098 10:49:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:27:16.441 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:27:16.441 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 
0 )) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:27:16.441 Found net devices under 0000:da:00.0: mlx_0_0 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:27:16.441 Found net devices under 0000:da:00.1: mlx_0_1 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:16.441 10:49:23 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.441 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:16.442 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:16.442 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:27:16.442 altname enp218s0f0np0 00:27:16.442 altname ens818f0np0 00:27:16.442 inet 192.168.100.8/24 scope global mlx_0_0 00:27:16.442 valid_lft forever preferred_lft forever 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:16.442 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:16.442 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:27:16.442 altname enp218s0f1np1 00:27:16.442 altname ens818f1np1 00:27:16.442 inet 192.168.100.9/24 scope global mlx_0_1 00:27:16.442 valid_lft forever preferred_lft forever 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:16.442 192.168.100.9' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:16.442 192.168.100.9' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:16.442 192.168.100.9' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2346212 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2346212 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2346212 ']' 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:16.442 10:49:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:16.442 [2024-07-24 10:49:23.865506] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:27:16.442 [2024-07-24 10:49:23.865550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.442 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.701 [2024-07-24 10:49:23.919904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.701 [2024-07-24 10:49:23.961295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.701 [2024-07-24 10:49:23.961335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.701 [2024-07-24 10:49:23.961346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.701 [2024-07-24 10:49:23.961351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.701 [2024-07-24 10:49:23.961356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.701 [2024-07-24 10:49:23.961399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.701 [2024-07-24 10:49:23.961501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.701 [2024-07-24 10:49:23.961586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.701 [2024-07-24 10:49:23.961587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:16.701 10:49:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:19.981 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:19.981 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:19.981 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:27:19.981 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:27:20.240 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:20.240 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:27:20.240 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:20.240 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:27:20.240 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:27:20.240 [2024-07-24 10:49:27.675775] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:27:20.499 [2024-07-24 10:49:27.695374] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x123fe40/0x124cf30) succeed. 00:27:20.499 [2024-07-24 10:49:27.704538] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1241430/0x128e5c0) succeed. 00:27:20.499 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.757 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:20.757 10:49:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:20.757 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:20.757 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:21.016 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:21.274 [2024-07-24 10:49:28.511779] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:21.274 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:21.533 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:27:21.533 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:27:21.533 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:21.533 10:49:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:27:22.912 Initializing NVMe Controllers 00:27:22.912 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:27:22.912 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:27:22.912 Initialization complete. Launching workers. 
00:27:22.912 ======================================================== 00:27:22.912 Latency(us) 00:27:22.912 Device Information : IOPS MiB/s Average min max 00:27:22.912 PCIE (0000:5f:00.0) NSID 1 from core 0: 99328.97 388.00 321.80 29.71 4435.51 00:27:22.912 ======================================================== 00:27:22.912 Total : 99328.97 388.00 321.80 29.71 4435.51 00:27:22.912 00:27:22.912 10:49:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:22.912 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.201 Initializing NVMe Controllers 00:27:26.201 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.201 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:26.201 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:26.201 Initialization complete. Launching workers. 00:27:26.201 ======================================================== 00:27:26.201 Latency(us) 00:27:26.201 Device Information : IOPS MiB/s Average min max 00:27:26.201 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6639.99 25.94 149.78 48.04 4088.10 00:27:26.201 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5214.99 20.37 190.77 69.04 4108.51 00:27:26.201 ======================================================== 00:27:26.201 Total : 11854.99 46.31 167.81 48.04 4108.51 00:27:26.201 00:27:26.201 10:49:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:26.201 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.487 Initializing NVMe Controllers 00:27:29.487 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.487 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:29.487 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:29.487 Initialization complete. Launching workers. 
00:27:29.487 ======================================================== 00:27:29.487 Latency(us) 00:27:29.487 Device Information : IOPS MiB/s Average min max 00:27:29.487 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18067.60 70.58 1770.88 511.61 6216.13 00:27:29.487 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4049.12 15.82 7963.40 4897.82 9892.35 00:27:29.487 ======================================================== 00:27:29.487 Total : 22116.72 86.39 2904.60 511.61 9892.35 00:27:29.487 00:27:29.487 10:49:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:27:29.487 10:49:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:29.487 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.679 Initializing NVMe Controllers 00:27:33.679 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.679 Controller IO queue size 128, less than required. 00:27:33.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.679 Controller IO queue size 128, less than required. 00:27:33.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.679 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:33.679 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:33.679 Initialization complete. Launching workers. 00:27:33.679 ======================================================== 00:27:33.679 Latency(us) 00:27:33.679 Device Information : IOPS MiB/s Average min max 00:27:33.679 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3873.66 968.41 33095.70 15171.09 73875.11 00:27:33.679 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4056.41 1014.10 31373.68 15344.15 48424.82 00:27:33.679 ======================================================== 00:27:33.679 Total : 7930.06 1982.52 32214.85 15171.09 73875.11 00:27:33.679 00:27:33.679 10:49:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:27:33.679 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.938 No valid NVMe controllers or AIO or URING devices found 00:27:33.938 Initializing NVMe Controllers 00:27:33.938 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.938 Controller IO queue size 128, less than required. 00:27:33.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.938 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:33.938 Controller IO queue size 128, less than required. 00:27:33.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.938 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:33.938 WARNING: Some requested NVMe devices were skipped 00:27:34.196 10:49:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:27:34.196 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.392 Initializing NVMe Controllers 00:27:38.392 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.392 Controller IO queue size 128, less than required. 00:27:38.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.392 Controller IO queue size 128, less than required. 00:27:38.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:38.392 Initialization complete. Launching workers. 00:27:38.392 00:27:38.392 ==================== 00:27:38.392 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:38.392 RDMA transport: 00:27:38.392 dev name: mlx5_0 00:27:38.392 polls: 402940 00:27:38.392 idle_polls: 399713 00:27:38.392 completions: 43478 00:27:38.392 queued_requests: 1 00:27:38.392 total_send_wrs: 21739 00:27:38.392 send_doorbell_updates: 2971 00:27:38.392 total_recv_wrs: 21866 00:27:38.392 recv_doorbell_updates: 2973 00:27:38.392 --------------------------------- 00:27:38.392 00:27:38.392 ==================== 00:27:38.392 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:38.392 RDMA transport: 00:27:38.392 dev name: mlx5_0 00:27:38.392 polls: 404503 00:27:38.392 idle_polls: 404236 00:27:38.392 completions: 20254 00:27:38.392 queued_requests: 1 00:27:38.392 total_send_wrs: 10127 00:27:38.392 send_doorbell_updates: 256 00:27:38.392 total_recv_wrs: 10254 00:27:38.392 recv_doorbell_updates: 257 00:27:38.392 --------------------------------- 00:27:38.392 ======================================================== 00:27:38.392 Latency(us) 00:27:38.392 Device Information : IOPS MiB/s Average min max 00:27:38.392 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5433.05 1358.26 23608.79 11314.90 59828.72 00:27:38.392 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2530.82 632.71 50654.92 31302.89 74339.23 00:27:38.392 ======================================================== 00:27:38.392 Total : 7963.87 1990.97 32203.73 11314.90 74339.23 00:27:38.392 00:27:38.392 10:49:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:38.392 10:49:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.652 10:49:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:38.652 10:49:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5f:00.0 ']' 00:27:38.652 10:49:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=e19f0bb4-f3cc-49ad-b614-48d35da8f4f8 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb e19f0bb4-f3cc-49ad-b614-48d35da8f4f8 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=e19f0bb4-f3cc-49ad-b614-48d35da8f4f8 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:45.218 { 00:27:45.218 "uuid": "e19f0bb4-f3cc-49ad-b614-48d35da8f4f8", 00:27:45.218 "name": "lvs_0", 00:27:45.218 "base_bdev": "Nvme0n1", 00:27:45.218 "total_data_clusters": 381173, 00:27:45.218 "free_clusters": 381173, 00:27:45.218 "block_size": 512, 00:27:45.218 "cluster_size": 4194304 00:27:45.218 } 00:27:45.218 ]' 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e19f0bb4-f3cc-49ad-b614-48d35da8f4f8") .free_clusters' 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=381173 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e19f0bb4-f3cc-49ad-b614-48d35da8f4f8") .cluster_size' 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1524692 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1524692 00:27:45.218 1524692 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1524692 -gt 20480 ']' 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e19f0bb4-f3cc-49ad-b614-48d35da8f4f8 lbd_0 20480 00:27:45.218 10:49:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e20e2034-2dfa-45ab-82b7-b0850e7e5d8a 00:27:45.218 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e20e2034-2dfa-45ab-82b7-b0850e7e5d8a lvs_n_0 00:27:45.785 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c6ebf914-c176-4ae1-aae9-f932fc1d28bd 00:27:45.785 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c6ebf914-c176-4ae1-aae9-f932fc1d28bd 00:27:45.785 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c6ebf914-c176-4ae1-aae9-f932fc1d28bd 00:27:45.785 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:45.785 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:45.785 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:45.785 10:49:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:45.785 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:45.785 { 00:27:45.785 "uuid": "e19f0bb4-f3cc-49ad-b614-48d35da8f4f8", 00:27:45.785 "name": "lvs_0", 00:27:45.786 "base_bdev": "Nvme0n1", 00:27:45.786 "total_data_clusters": 381173, 00:27:45.786 "free_clusters": 376053, 00:27:45.786 "block_size": 512, 00:27:45.786 "cluster_size": 4194304 00:27:45.786 }, 00:27:45.786 { 00:27:45.786 "uuid": "c6ebf914-c176-4ae1-aae9-f932fc1d28bd", 00:27:45.786 "name": "lvs_n_0", 00:27:45.786 "base_bdev": "e20e2034-2dfa-45ab-82b7-b0850e7e5d8a", 00:27:45.786 "total_data_clusters": 5114, 00:27:45.786 "free_clusters": 5114, 00:27:45.786 "block_size": 512, 00:27:45.786 "cluster_size": 4194304 00:27:45.786 } 00:27:45.786 ]' 00:27:45.786 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c6ebf914-c176-4ae1-aae9-f932fc1d28bd") .free_clusters' 00:27:45.786 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:45.786 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c6ebf914-c176-4ae1-aae9-f932fc1d28bd") .cluster_size' 00:27:46.044 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:46.044 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:46.044 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:46.044 20456 00:27:46.044 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:46.044 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c6ebf914-c176-4ae1-aae9-f932fc1d28bd lbd_nest_0 20456 00:27:46.044 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=9db904e6-58a6-4ccd-bf9e-28e495b0a2d4 00:27:46.044 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.303 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:46.303 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9db904e6-58a6-4ccd-bf9e-28e495b0a2d4 00:27:46.562 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:46.562 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:46.562 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:46.562 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:46.562 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:46.562 10:49:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:46.562 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.834 
Initializing NVMe Controllers 00:27:58.834 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.834 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:58.834 Initialization complete. Launching workers. 00:27:58.834 ======================================================== 00:27:58.834 Latency(us) 00:27:58.834 Device Information : IOPS MiB/s Average min max 00:27:58.834 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5753.20 2.81 173.61 70.09 7254.29 00:27:58.834 ======================================================== 00:27:58.834 Total : 5753.20 2.81 173.61 70.09 7254.29 00:27:58.834 00:27:58.834 10:50:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:58.834 10:50:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:58.834 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.039 Initializing NVMe Controllers 00:28:11.039 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.039 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:11.039 Initialization complete. Launching workers. 00:28:11.039 ======================================================== 00:28:11.039 Latency(us) 00:28:11.039 Device Information : IOPS MiB/s Average min max 00:28:11.039 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2638.60 329.82 378.63 157.89 8134.58 00:28:11.039 ======================================================== 00:28:11.039 Total : 2638.60 329.82 378.63 157.89 8134.58 00:28:11.039 00:28:11.039 10:50:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:11.039 10:50:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:11.039 10:50:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:11.039 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.018 Initializing NVMe Controllers 00:28:21.018 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.018 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.018 Initialization complete. Launching workers. 
00:28:21.018 ======================================================== 00:28:21.018 Latency(us) 00:28:21.018 Device Information : IOPS MiB/s Average min max 00:28:21.018 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11159.54 5.45 2867.03 985.70 9445.36 00:28:21.018 ======================================================== 00:28:21.018 Total : 11159.54 5.45 2867.03 985.70 9445.36 00:28:21.018 00:28:21.018 10:50:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:21.018 10:50:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:21.018 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.225 Initializing NVMe Controllers 00:28:33.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:33.225 Initialization complete. Launching workers. 00:28:33.225 ======================================================== 00:28:33.225 Latency(us) 00:28:33.225 Device Information : IOPS MiB/s Average min max 00:28:33.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4002.60 500.32 8000.49 4897.49 16019.95 00:28:33.225 ======================================================== 00:28:33.225 Total : 4002.60 500.32 8000.49 4897.49 16019.95 00:28:33.225 00:28:33.225 10:50:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:33.225 10:50:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:33.225 10:50:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:33.225 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.433 Initializing NVMe Controllers 00:28:45.433 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.433 Controller IO queue size 128, less than required. 00:28:45.433 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.433 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.433 Initialization complete. Launching workers. 
00:28:45.433 ======================================================== 00:28:45.433 Latency(us) 00:28:45.433 Device Information : IOPS MiB/s Average min max 00:28:45.433 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18434.90 9.00 6946.29 2076.94 15781.42 00:28:45.433 ======================================================== 00:28:45.433 Total : 18434.90 9.00 6946.29 2076.94 15781.42 00:28:45.433 00:28:45.433 10:50:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:45.433 10:50:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:45.433 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.427 Initializing NVMe Controllers 00:28:55.427 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.427 Controller IO queue size 128, less than required. 00:28:55.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.427 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.427 Initialization complete. Launching workers. 00:28:55.427 ======================================================== 00:28:55.427 Latency(us) 00:28:55.427 Device Information : IOPS MiB/s Average min max 00:28:55.427 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10845.20 1355.65 11803.16 3569.52 24622.97 00:28:55.427 ======================================================== 00:28:55.427 Total : 10845.20 1355.65 11803.16 3569.52 24622.97 00:28:55.427 00:28:55.427 10:51:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.427 10:51:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9db904e6-58a6-4ccd-bf9e-28e495b0a2d4 00:28:55.744 10:51:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:55.744 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e20e2034-2dfa-45ab-82b7-b0850e7e5d8a 00:28:56.038 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 
00:28:56.299 rmmod nvme_rdma 00:28:56.299 rmmod nvme_fabrics 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2346212 ']' 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2346212 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2346212 ']' 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2346212 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2346212 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2346212' 00:28:56.299 killing process with pid 2346212 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2346212 00:28:56.299 10:51:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2346212 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:58.839 00:28:58.839 real 1m47.435s 00:28:58.839 user 6m50.117s 00:28:58.839 sys 0m5.796s 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:58.839 ************************************ 00:28:58.839 END TEST nvmf_perf 00:28:58.839 ************************************ 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.839 ************************************ 00:28:58.839 START TEST nvmf_fio_host 00:28:58.839 ************************************ 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:28:58.839 * Looking for test storage... 
00:28:58.839 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.839 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:58.840 10:51:05 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.840 10:51:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:04.114 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:29:04.115 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:29:04.115 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:29:04.115 Found net devices under 0000:da:00.0: mlx_0_0 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:29:04.115 Found net devices under 0000:da:00.1: mlx_0_1 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:04.115 10:51:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:04.115 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:04.115 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.115 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:04.116 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:04.116 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:29:04.116 altname enp218s0f0np0 00:29:04.116 altname ens818f0np0 00:29:04.116 inet 192.168.100.8/24 scope global mlx_0_0 00:29:04.116 valid_lft forever preferred_lft forever 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:04.116 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:04.116 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:29:04.116 altname enp218s0f1np1 00:29:04.116 altname ens818f1np1 00:29:04.116 inet 192.168.100.9/24 scope global mlx_0_1 00:29:04.116 valid_lft forever preferred_lft forever 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:04.116 10:51:11 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:04.116 192.168.100.9' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:04.116 192.168.100.9' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:04.116 192.168.100.9' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2366073 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2366073 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2366073 ']' 00:29:04.116 10:51:11 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.116 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.116 [2024-07-24 10:51:11.189088] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:29:04.116 [2024-07-24 10:51:11.189132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.116 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.116 [2024-07-24 10:51:11.244835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.116 [2024-07-24 10:51:11.286267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.116 [2024-07-24 10:51:11.286306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.116 [2024-07-24 10:51:11.286313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.117 [2024-07-24 10:51:11.286319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.117 [2024-07-24 10:51:11.286324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.117 [2024-07-24 10:51:11.286366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.117 [2024-07-24 10:51:11.286465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.117 [2024-07-24 10:51:11.286556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.117 [2024-07-24 10:51:11.286557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.117 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.117 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:04.117 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:04.117 [2024-07-24 10:51:11.555385] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11b66a0/0x11bab70) succeed. 00:29:04.117 [2024-07-24 10:51:11.564534] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11b7c90/0x11fc200) succeed. 
00:29:04.376 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:04.376 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.376 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.376 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:04.635 Malloc1 00:29:04.635 10:51:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.895 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:04.895 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:05.155 [2024-07-24 10:51:12.464350] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:05.155 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.414 10:51:12 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:05.414 10:51:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:05.672 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:05.672 fio-3.35 00:29:05.672 Starting 1 thread 00:29:05.672 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.206 00:29:08.206 test: (groupid=0, jobs=1): err= 0: pid=2366467: Wed Jul 24 10:51:15 2024 00:29:08.206 read: IOPS=17.3k, BW=67.6MiB/s (70.9MB/s)(136MiB/2004msec) 00:29:08.206 slat (nsec): min=1394, max=38293, avg=1546.60, stdev=503.84 00:29:08.206 clat (usec): min=2319, max=6675, avg=3670.25, stdev=109.17 00:29:08.206 lat (usec): min=2342, max=6676, avg=3671.79, stdev=109.11 00:29:08.206 clat percentiles (usec): 00:29:08.206 | 1.00th=[ 3326], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3654], 00:29:08.206 | 30.00th=[ 3654], 40.00th=[ 3654], 50.00th=[ 3654], 60.00th=[ 3654], 00:29:08.206 | 70.00th=[ 3687], 80.00th=[ 3687], 90.00th=[ 3687], 95.00th=[ 3687], 00:29:08.206 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 5211], 99.95th=[ 6128], 00:29:08.206 | 99.99th=[ 6652] 00:29:08.206 bw ( KiB/s): min=67624, max=70016, per=100.00%, avg=69244.00, stdev=1118.10, samples=4 00:29:08.206 iops : min=16906, max=17504, avg=17311.00, stdev=279.53, samples=4 00:29:08.206 write: IOPS=17.3k, BW=67.7MiB/s (71.0MB/s)(136MiB/2004msec); 0 zone resets 00:29:08.206 slat (nsec): min=1445, max=19221, avg=1637.31, stdev=516.09 00:29:08.206 clat (usec): min=2354, max=6660, avg=3668.24, stdev=100.55 00:29:08.206 lat (usec): min=2365, max=6662, avg=3669.88, stdev=100.49 00:29:08.206 clat percentiles (usec): 00:29:08.206 | 1.00th=[ 3326], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3654], 00:29:08.206 | 30.00th=[ 3654], 40.00th=[ 3654], 50.00th=[ 3654], 60.00th=[ 3654], 00:29:08.206 | 70.00th=[ 3687], 80.00th=[ 3687], 90.00th=[ 3687], 95.00th=[ 3687], 00:29:08.206 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4752], 99.95th=[ 5669], 00:29:08.206 | 99.99th=[ 6587] 00:29:08.206 bw ( KiB/s): min=67800, max=69896, per=100.00%, avg=69318.00, stdev=1016.56, samples=4 00:29:08.206 iops : min=16950, max=17474, avg=17329.50, stdev=254.14, samples=4 00:29:08.206 lat (msec) : 4=98.53%, 10=1.47% 00:29:08.206 cpu : usr=99.55%, sys=0.05%, 
ctx=15, majf=0, minf=4 00:29:08.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:08.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:08.206 issued rwts: total=34689,34719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:08.206 00:29:08.206 Run status group 0 (all jobs): 00:29:08.206 READ: bw=67.6MiB/s (70.9MB/s), 67.6MiB/s-67.6MiB/s (70.9MB/s-70.9MB/s), io=136MiB (142MB), run=2004-2004msec 00:29:08.206 WRITE: bw=67.7MiB/s (71.0MB/s), 67.7MiB/s-67.7MiB/s (71.0MB/s-71.0MB/s), io=136MiB (142MB), run=2004-2004msec 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.206 10:51:15 
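Each fio pass in this test runs a stock fio binary with the SPDK NVMe plugin preloaded, so the target is addressed through the --filename string rather than a kernel block device; the job file selects ioengine=spdk, as shown in the fio banner above. A sketch of the invocation with the paths and listener address from this run:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096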
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:08.206 10:51:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:08.206 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:08.206 fio-3.35 00:29:08.206 Starting 1 thread 00:29:08.465 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.997 00:29:10.997 test: (groupid=0, jobs=1): err= 0: pid=2367031: Wed Jul 24 10:51:17 2024 00:29:10.997 read: IOPS=14.1k, BW=220MiB/s (230MB/s)(433MiB/1972msec) 00:29:10.997 slat (nsec): min=2301, max=52617, avg=2676.05, stdev=1352.10 00:29:10.997 clat (usec): min=455, max=10141, avg=1687.46, stdev=1400.77 00:29:10.997 lat (usec): min=457, max=10147, avg=1690.14, stdev=1401.32 00:29:10.997 clat percentiles (usec): 00:29:10.997 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 930], 00:29:10.997 | 30.00th=[ 1004], 40.00th=[ 1090], 50.00th=[ 1188], 60.00th=[ 1319], 00:29:10.997 | 70.00th=[ 1450], 80.00th=[ 1647], 90.00th=[ 4817], 95.00th=[ 5080], 00:29:10.997 | 99.00th=[ 6849], 99.50th=[ 7439], 99.90th=[ 8848], 99.95th=[ 9503], 00:29:10.997 | 99.99th=[10028] 00:29:10.997 bw ( KiB/s): min=107776, max=113472, per=49.06%, avg=110336.00, stdev=2360.78, samples=4 00:29:10.997 iops : min= 6736, max= 7092, avg=6896.00, stdev=147.55, samples=4 00:29:10.997 write: IOPS=8039, BW=126MiB/s (132MB/s)(225MiB/1789msec); 0 zone resets 00:29:10.997 slat (usec): min=27, max=125, avg=29.43, stdev= 5.51 00:29:10.997 clat (usec): min=4737, max=19791, avg=12883.32, stdev=1854.17 00:29:10.997 lat (usec): min=4769, max=19818, avg=12912.75, stdev=1853.81 00:29:10.997 clat percentiles (usec): 00:29:10.997 | 1.00th=[ 7898], 5.00th=[10290], 10.00th=[10814], 20.00th=[11469], 00:29:10.997 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13173], 00:29:10.997 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15401], 95.00th=[16188], 00:29:10.997 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19268], 99.95th=[19530], 00:29:10.997 | 99.99th=[19792] 00:29:10.997 bw ( KiB/s): min=108672, max=117664, per=88.82%, avg=114248.00, stdev=3882.64, samples=4 00:29:10.997 iops : min= 6792, max= 7354, avg=7140.50, stdev=242.67, samples=4 00:29:10.997 lat (usec) : 500=0.01%, 750=1.82%, 1000=17.54% 00:29:10.997 lat (msec) : 2=37.37%, 4=1.92%, 10=8.30%, 20=33.04% 00:29:10.997 cpu : usr=97.16%, sys=1.15%, ctx=218, majf=0, minf=3 00:29:10.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:10.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:10.997 issued rwts: total=27720,14383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:10.997 00:29:10.997 Run status group 0 (all jobs): 00:29:10.997 READ: bw=220MiB/s (230MB/s), 220MiB/s-220MiB/s (230MB/s-230MB/s), io=433MiB (454MB), run=1972-1972msec 00:29:10.997 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=225MiB (236MB), run=1789-1789msec 00:29:10.997 10:51:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:29:10.997 10:51:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5f:00.0 -i 192.168.100.8 00:29:14.283 Nvme0n1 00:29:14.283 10:51:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:19.552 10:51:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a820e2ef-f0ef-4a73-8839-4b6ad907c59b 00:29:19.552 10:51:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a820e2ef-f0ef-4a73-8839-4b6ad907c59b 00:29:19.552 10:51:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a820e2ef-f0ef-4a73-8839-4b6ad907c59b 00:29:19.552 10:51:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:19.552 10:51:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:19.552 10:51:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:19.552 10:51:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:19.552 { 00:29:19.552 "uuid": "a820e2ef-f0ef-4a73-8839-4b6ad907c59b", 00:29:19.552 "name": "lvs_0", 00:29:19.552 "base_bdev": "Nvme0n1", 00:29:19.552 "total_data_clusters": 1489, 00:29:19.552 "free_clusters": 1489, 00:29:19.552 "block_size": 512, 00:29:19.552 "cluster_size": 1073741824 00:29:19.552 } 00:29:19.552 ]' 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a820e2ef-f0ef-4a73-8839-4b6ad907c59b") .free_clusters' 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1489 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a820e2ef-f0ef-4a73-8839-4b6ad907c59b") .cluster_size' 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # 
cs=1073741824 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1524736 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1524736 00:29:19.552 1524736 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1524736 00:29:19.552 1c7ac354-48a4-4efe-a9e2-dfe87407f3b9 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:19.552 10:51:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:19.810 10:51:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:19.810 10:51:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:19.810 10:51:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:19.810 10:51:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:20.068 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:20.068 fio-3.35 00:29:20.068 Starting 1 thread 00:29:20.068 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.601 00:29:22.601 test: (groupid=0, jobs=1): err= 0: pid=2369010: Wed Jul 24 10:51:29 2024 00:29:22.601 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.9MiB/2005msec) 00:29:22.601 slat (nsec): min=1389, max=25810, avg=1517.97, stdev=346.01 00:29:22.601 clat (usec): min=159, max=268530, avg=6217.16, stdev=14165.20 00:29:22.601 lat (usec): min=160, max=268533, avg=6218.68, stdev=14165.23 00:29:22.601 clat percentiles (msec): 00:29:22.601 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:22.601 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:22.601 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:22.601 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 268], 99.95th=[ 271], 00:29:22.601 | 99.99th=[ 271] 00:29:22.601 bw ( KiB/s): min=20192, max=47224, per=99.92%, avg=40240.00, stdev=13368.24, samples=4 00:29:22.601 iops : min= 5048, max=11806, avg=10060.00, stdev=3342.06, samples=4 00:29:22.601 write: IOPS=10.1k, BW=39.3MiB/s (41.3MB/s)(78.9MiB/2005msec); 0 zone resets 00:29:22.601 slat (nsec): min=1439, max=17726, avg=1633.49, stdev=319.01 00:29:22.601 clat (usec): min=139, max=268929, avg=6358.93, stdev=15127.23 00:29:22.601 lat (usec): min=141, max=268932, avg=6360.56, stdev=15127.29 00:29:22.601 clat percentiles (msec): 00:29:22.601 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:22.601 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:22.601 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:22.601 | 99.00th=[ 6], 99.50th=[ 8], 99.90th=[ 271], 99.95th=[ 271], 00:29:22.601 | 99.99th=[ 271] 00:29:22.601 bw ( KiB/s): min=21064, max=46840, per=99.95%, avg=40270.00, stdev=12804.90, samples=4 00:29:22.601 iops : min= 5266, max=11710, avg=10067.50, stdev=3201.23, samples=4 00:29:22.601 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.03% 00:29:22.601 lat (msec) : 2=0.06%, 4=0.18%, 10=99.37%, 500=0.32% 00:29:22.601 cpu : usr=99.50%, sys=0.15%, ctx=16, majf=0, minf=4 00:29:22.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:22.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:22.601 issued rwts: total=20187,20195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:22.601 00:29:22.601 Run status group 0 (all jobs): 00:29:22.601 READ: bw=39.3MiB/s (41.2MB/s), 
39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.9MiB (82.7MB), run=2005-2005msec 00:29:22.601 WRITE: bw=39.3MiB/s (41.3MB/s), 39.3MiB/s-39.3MiB/s (41.3MB/s-41.3MB/s), io=78.9MiB (82.7MB), run=2005-2005msec 00:29:22.601 10:51:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:22.601 10:51:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:23.537 10:51:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=60807a99-3772-414f-add7-ff2a26255f4f 00:29:23.537 10:51:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 60807a99-3772-414f-add7-ff2a26255f4f 00:29:23.537 10:51:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=60807a99-3772-414f-add7-ff2a26255f4f 00:29:23.537 10:51:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:23.537 10:51:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:23.537 10:51:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:23.537 10:51:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:23.796 { 00:29:23.796 "uuid": "a820e2ef-f0ef-4a73-8839-4b6ad907c59b", 00:29:23.796 "name": "lvs_0", 00:29:23.796 "base_bdev": "Nvme0n1", 00:29:23.796 "total_data_clusters": 1489, 00:29:23.796 "free_clusters": 0, 00:29:23.796 "block_size": 512, 00:29:23.796 "cluster_size": 1073741824 00:29:23.796 }, 00:29:23.796 { 00:29:23.796 "uuid": "60807a99-3772-414f-add7-ff2a26255f4f", 00:29:23.796 "name": "lvs_n_0", 00:29:23.796 "base_bdev": "1c7ac354-48a4-4efe-a9e2-dfe87407f3b9", 00:29:23.796 "total_data_clusters": 380811, 00:29:23.796 "free_clusters": 380811, 00:29:23.796 "block_size": 512, 00:29:23.796 "cluster_size": 4194304 00:29:23.796 } 00:29:23.796 ]' 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60807a99-3772-414f-add7-ff2a26255f4f") .free_clusters' 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=380811 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="60807a99-3772-414f-add7-ff2a26255f4f") .cluster_size' 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1523244 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1523244 00:29:23.796 1523244 00:29:23.796 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1523244 00:29:24.731 d72e4d48-4582-4615-a468-14e0e77f4e13 00:29:24.731 10:51:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:24.731 10:51:32 
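get_lvs_free_mb in the traces above turns the lvstore's cluster accounting into the size argument passed to bdev_lvol_create: free_mb = free_clusters * cluster_size / 1048576. Checking the two stores created in this run:

  echo $(( 1489 * 1073741824 / 1048576 ))   # lvs_0, 1 GiB clusters  -> 1524736, size of lvs_0/lbd_0
  echo $(( 380811 * 4194304 / 1048576 ))    # lvs_n_0, 4 MiB clusters -> 1523244, size of lvs_n_0/lbd_nest_0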
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:24.989 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:25.247 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:25.247 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:25.247 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:25.247 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:25.247 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:25.247 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:25.247 10:51:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:25.505 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:25.505 fio-3.35 00:29:25.505 Starting 1 thread 00:29:25.505 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.111 00:29:28.111 test: (groupid=0, jobs=1): err= 0: pid=2370047: Wed Jul 24 10:51:35 2024 00:29:28.111 read: IOPS=9769, BW=38.2MiB/s (40.0MB/s)(76.5MiB/2005msec) 00:29:28.111 slat (nsec): min=1393, max=19468, avg=1522.42, stdev=400.18 00:29:28.111 clat (usec): min=3618, max=11059, avg=6495.10, stdev=210.55 00:29:28.111 lat (usec): min=3621, max=11060, avg=6496.62, stdev=210.51 00:29:28.111 clat percentiles (usec): 00:29:28.111 | 1.00th=[ 6390], 5.00th=[ 6390], 10.00th=[ 6456], 20.00th=[ 6456], 00:29:28.111 | 30.00th=[ 6456], 40.00th=[ 6456], 50.00th=[ 6456], 60.00th=[ 6521], 00:29:28.111 | 70.00th=[ 6521], 80.00th=[ 6521], 90.00th=[ 6521], 95.00th=[ 6587], 00:29:28.111 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 9372], 99.95th=[10552], 00:29:28.111 | 99.99th=[11076] 00:29:28.111 bw ( KiB/s): min=38048, max=39584, per=99.94%, avg=39056.00, stdev=720.65, samples=4 00:29:28.111 iops : min= 9512, max= 9896, avg=9764.00, stdev=180.16, samples=4 00:29:28.111 write: IOPS=9781, BW=38.2MiB/s (40.1MB/s)(76.6MiB/2005msec); 0 zone resets 00:29:28.111 slat (nsec): min=1446, max=18026, avg=1633.63, stdev=392.29 00:29:28.111 clat (usec): min=3587, max=11039, avg=6516.53, stdev=213.80 00:29:28.111 lat (usec): min=3591, max=11041, avg=6518.16, stdev=213.76 00:29:28.111 clat percentiles (usec): 00:29:28.111 | 1.00th=[ 6390], 5.00th=[ 6456], 10.00th=[ 6456], 20.00th=[ 6456], 00:29:28.111 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6521], 60.00th=[ 6521], 00:29:28.111 | 70.00th=[ 6521], 80.00th=[ 6521], 90.00th=[ 6587], 95.00th=[ 6652], 00:29:28.111 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 9372], 99.95th=[10945], 00:29:28.111 | 99.99th=[11076] 00:29:28.111 bw ( KiB/s): min=38504, max=39664, per=99.91%, avg=39090.00, stdev=493.08, samples=4 00:29:28.111 iops : min= 9626, max= 9916, avg=9772.50, stdev=123.27, samples=4 00:29:28.111 lat (msec) : 4=0.06%, 10=99.86%, 20=0.08% 00:29:28.111 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=4 00:29:28.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:28.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:28.111 issued rwts: total=19588,19612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:28.111 00:29:28.111 Run status group 0 (all jobs): 00:29:28.111 READ: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2005-2005msec 00:29:28.111 WRITE: bw=38.2MiB/s (40.1MB/s), 38.2MiB/s-38.2MiB/s (40.1MB/s-40.1MB/s), io=76.6MiB (80.3MB), run=2005-2005msec 00:29:28.111 10:51:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:28.111 10:51:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:28.111 10:51:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:34.672 10:51:40 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:34.672 10:51:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:38.856 10:51:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:38.856 10:51:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:41.386 rmmod nvme_rdma 00:29:41.386 rmmod nvme_fabrics 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2366073 ']' 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2366073 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2366073 ']' 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2366073 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2366073 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2366073' 00:29:41.386 killing process with pid 2366073 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2366073 00:29:41.386 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2366073 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:41.645 10:51:48 
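The cleanup traced above unwinds the stack in reverse: drop the subsystem, delete the nested lvol and its store before the outer one, detach the NVMe controller, then stop the target and unload the initiator modules. A condensed sketch of the same sequence (the harness goes through killprocess/nvmftestfini; a plain kill is shown here, with $rpc the rpc.py path used earlier):

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  sync
  $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0
  $rpc bdev_lvol_delete lvs_0/lbd_0
  $rpc bdev_lvol_delete_lvstore -l lvs_0
  $rpc bdev_nvme_detach_controller Nvme0
  kill "$nvmfpid"            # 2366073 in this run
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics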
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:41.645 00:29:41.645 real 0m43.135s 00:29:41.645 user 3m4.744s 00:29:41.645 sys 0m6.000s 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.645 ************************************ 00:29:41.645 END TEST nvmf_fio_host 00:29:41.645 ************************************ 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.645 ************************************ 00:29:41.645 START TEST nvmf_failover 00:29:41.645 ************************************ 00:29:41.645 10:51:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:41.645 * Looking for test storage... 00:29:41.645 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.645 10:51:49 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.645 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.646 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.905 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:41.905 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:41.905 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:41.905 10:51:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 
-- # e810=() 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:29:47.173 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 
15' 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:29:47.173 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:29:47.173 Found net devices under 0000:da:00.0: mlx_0_0 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:29:47.173 Found net devices under 0000:da:00.1: mlx_0_1 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:47.173 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:47.174 10:51:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 
-- # ip -o -4 addr show mlx_0_0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:47.174 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:47.174 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:29:47.174 altname enp218s0f0np0 00:29:47.174 altname ens818f0np0 00:29:47.174 inet 192.168.100.8/24 scope global mlx_0_0 00:29:47.174 valid_lft forever preferred_lft forever 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:47.174 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:47.174 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:29:47.174 altname enp218s0f1np1 00:29:47.174 altname ens818f1np1 00:29:47.174 inet 192.168.100.9/24 scope global mlx_0_1 00:29:47.174 valid_lft forever preferred_lft forever 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
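The trace above resolves the RoCE-capable interfaces (mlx_0_0, mlx_0_1) and pulls each one's IPv4 address with an ip/awk/cut pipeline, yielding 192.168.100.8 and 192.168.100.9. A minimal sketch of that lookup, assuming the same interface names and 192.168.100.0/24 addressing already configured on this rig, is:

# Sketch of the per-interface address lookup performed by nvmf/common.sh above;
# interface names and addressing are taken from this log, not hard requirements.
for ifc in mlx_0_0 mlx_0_1; do
    addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
    echo "$ifc -> ${addr:-<no IPv4 address>}"
done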
00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:47.174 192.168.100.9' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:47.174 192.168.100.9' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:47.174 192.168.100.9' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:47.174 10:51:54 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2375434 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2375434 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2375434 ']' 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:47.174 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:47.174 [2024-07-24 10:51:54.171450] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:29:47.174 [2024-07-24 10:51:54.171506] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.174 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.174 [2024-07-24 10:51:54.227569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.175 [2024-07-24 10:51:54.268538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.175 [2024-07-24 10:51:54.268579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.175 [2024-07-24 10:51:54.268586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.175 [2024-07-24 10:51:54.268591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.175 [2024-07-24 10:51:54.268595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
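At this point the harness has set NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024', loaded nvme-rdma, and started the target via nvmfappstart -m 0xE (mask 0xE pins the reactors to cores 1-3, as the reactor notices that follow confirm). A hedged sketch of what that launch-and-wait step amounts to is below; the binary and socket paths are the ones printed in this log, while the polling loop is only an illustrative stand-in for the waitforlisten helper, not its actual code.

# Launch nvmf_tgt with the arguments shown in the trace and wait for its RPC socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Illustrative wait: poll the RPC socket until the target answers.
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"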
00:29:47.175 [2024-07-24 10:51:54.268697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.175 [2024-07-24 10:51:54.268784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.175 [2024-07-24 10:51:54.268784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.175 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.175 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:47.175 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.175 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.175 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:47.175 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.175 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:47.175 [2024-07-24 10:51:54.573731] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11b5c90/0x11ba140) succeed. 00:29:47.175 [2024-07-24 10:51:54.582687] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11b71e0/0x11fb7d0) succeed. 00:29:47.433 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:47.433 Malloc0 00:29:47.691 10:51:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.691 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.949 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:47.949 [2024-07-24 10:51:55.372135] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:47.949 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:48.208 [2024-07-24 10:51:55.548477] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:48.208 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:48.467 [2024-07-24 10:51:55.741178] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2375798 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2375798 /var/tmp/bdevperf.sock 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2375798 ']' 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.467 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:48.725 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:48.725 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:48.725 10:51:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.983 NVMe0n1 00:29:48.983 10:51:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:49.242 00:29:49.242 10:51:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:49.242 10:51:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2375877 00:29:49.242 10:51:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:50.176 10:51:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:50.433 10:51:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:53.723 10:52:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:53.723 00:29:53.723 10:52:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:53.723 10:52:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:57.008 10:52:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:57.008 [2024-07-24 10:52:04.298732] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:57.008 10:52:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:57.943 10:52:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:58.201 10:52:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2375877 00:30:04.812 0 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2375798 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2375798 ']' 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2375798 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2375798 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2375798' 00:30:04.812 killing process with pid 2375798 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2375798 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2375798 00:30:04.812 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.812 [2024-07-24 10:51:55.814204] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:30:04.812 [2024-07-24 10:51:55.814260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375798 ] 00:30:04.812 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.812 [2024-07-24 10:51:55.868303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.812 [2024-07-24 10:51:55.908750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.812 Running I/O for 15 seconds... 
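The host/failover.sh trace above is the failover exercise itself: bdevperf attaches NVMe0 over RDMA to 192.168.100.8 on ports 4420 and 4421, runs verify I/O for 15 seconds, and the script cycles the target listeners underneath it (drop 4420, attach 4422, drop 4421, re-add 4420, drop 4422). The ABORTED - SQ DELETION completions dumped from try.txt below are the expected consequence of tearing a listener down while writes are in flight; the initiator fails over to a remaining path and the run still completes. A condensed sketch of that listener cycle, with the RPCs, paths, and addresses taken from this log (the sleeps stand in for the script's pacing):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bperf="$rpc -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1

# Two initial paths for bdevperf (ports 4420 and 4421).
$bperf bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn
$bperf bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $nqn

# Cycle the target-side listeners while I/O is running to force path failover.
$rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420
sleep 3
$bperf bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4422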
00:30:04.812 [2024-07-24 10:51:58.666064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.812 [2024-07-24 10:51:58.666250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.812 [2024-07-24 10:51:58.666262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:25 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.813 [2024-07-24 10:51:58.666748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.813 [2024-07-24 10:51:58.666755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 
00:30:04.814 [2024-07-24 10:51:58.666956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.666991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.666999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.814 [2024-07-24 10:51:58.667264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.814 [2024-07-24 10:51:58.667276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:121 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.815 [2024-07-24 10:51:58.667706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.815 [2024-07-24 10:51:58.667713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 
p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:51:58.667896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:51:58.667910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.667920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:51:58.667927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56373 cdw0:d9cad000 sqhd:ea54 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.669777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.816 [2024-07-24 10:51:58.669789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:04.816 [2024-07-24 10:51:58.669795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23568 len:8 PRP1 0x0 PRP2 0x0 00:30:04.816 [2024-07-24 10:51:58.669802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.816 [2024-07-24 10:51:58.669840] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:30:04.816 [2024-07-24 10:51:58.669848] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:04.816 [2024-07-24 10:51:58.669859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:04.816 [2024-07-24 10:51:58.672645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:04.816 [2024-07-24 10:51:58.687165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:04.816 [2024-07-24 10:51:58.728936] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:04.816 [2024-07-24 10:52:02.115444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:52:02.115590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:52:02.115603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x184400 00:30:04.816 [2024-07-24 10:52:02.115692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:52:02.115708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115716] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:52:02.115723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:52:02.115737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.816 [2024-07-24 10:52:02.115746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.816 [2024-07-24 10:52:02.115752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.115830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.115848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.817 [2024-07-24 10:52:02.115966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.115981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.115989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.115995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109320 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109392 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000758c000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x184400 00:30:04.817 [2024-07-24 10:52:02.116152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.817 [2024-07-24 10:52:02.116160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.818 [2024-07-24 10:52:02.116658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.818 [2024-07-24 10:52:02.116681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 
len:0x1000 key:0x184400 00:30:04.818 [2024-07-24 10:52:02.116687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.116701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.116718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.116732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.116746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.116760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.116775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116815] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 
[2024-07-24 10:52:02.116958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.116986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.116994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.117000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.117014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.117028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.117043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.117058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.117072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.819 [2024-07-24 10:52:02.117085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:109560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.117100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.117115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.117129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.117143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.117159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x184400 00:30:04.819 [2024-07-24 10:52:02.117174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.819 [2024-07-24 10:52:02.117181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109632 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:02.117316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.117327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.820 [2024-07-24 10:52:02.117333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56375 cdw0:d9cad000 sqhd:8384 p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:02.119120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.820 [2024-07-24 10:52:02.119134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.820 [2024-07-24 10:52:02.119140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110208 len:8 PRP1 0x0 PRP2 0x0 00:30:04.820 [2024-07-24 10:52:02.119147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
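The aborted and manually completed commands above, together with the qpair teardown and trid failover that follow, make up the next path switch in this run: bdev_nvme moves from 192.168.100.8:4421 to 192.168.100.8:4422 and resets the controller. A target configured for this kind of switch exposes the same subsystem on several listeners. A minimal sketch using scripts/rpc.py, with the NQN and addresses taken from the log and the bdev name, controller name, and serial number (Malloc0, Nvme0, SPDK00000000000001) assumed for illustration:

  # target side: one RDMA subsystem, reachable on the three ports seen in this log
  ./scripts/rpc.py nvmf_create_transport -t RDMA
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -f ipv4 -s 4421
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -f ipv4 -s 4422
  # host side: attach the first path as a bdev_nvme controller
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1

The remaining ports would be registered against the same controller name so that bdev_nvme_failover_trid has an alternate trid to switch to when the active path drops; the exact host-side registration used by this test is not shown in the log.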
00:30:04.820 [2024-07-24 10:52:02.119183] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:30:04.820 [2024-07-24 10:52:02.119192] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:30:04.820 [2024-07-24 10:52:02.119199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:04.820 [2024-07-24 10:52:02.122006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:04.820 [2024-07-24 10:52:02.136675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:04.820 [2024-07-24 10:52:02.181799] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:04.820 [2024-07-24 10:52:06.495627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.820 [2024-07-24 10:52:06.495668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.820 [2024-07-24 10:52:06.495690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.820 [2024-07-24 10:52:06.495705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.820 [2024-07-24 10:52:06.495719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.820 [2024-07-24 10:52:06.495734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.820 [2024-07-24 10:52:06.495748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495772] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 
lba:75664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.495986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.820 [2024-07-24 10:52:06.495995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x184400 00:30:04.820 [2024-07-24 10:52:06.496002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75736 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 
len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.821 [2024-07-24 10:52:06.496266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.821 [2024-07-24 10:52:06.496279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.821 [2024-07-24 10:52:06.496294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.821 [2024-07-24 10:52:06.496309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.821 [2024-07-24 10:52:06.496322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 
sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 
00:30:04.821 [2024-07-24 10:52:06.496477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184400 00:30:04.821 [2024-07-24 10:52:06.496552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.821 [2024-07-24 10:52:06.496560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 
10:52:06.496619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76040 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x184400 00:30:04.822 [2024-07-24 10:52:06.496844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.822 [2024-07-24 10:52:06.496979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.822 [2024-07-24 10:52:06.496987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.823 [2024-07-24 10:52:06.496993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.823 [2024-07-24 10:52:06.497008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.823 [2024-07-24 10:52:06.497022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:04.823 [2024-07-24 10:52:06.497038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.823 [2024-07-24 10:52:06.497053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.823 [2024-07-24 10:52:06.497068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.823 [2024-07-24 10:52:06.497082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 
cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 
dnr:0 00:30:04.823 [2024-07-24 10:52:06.497449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.497479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.497485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.506572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.506582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.506591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.506598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.823 [2024-07-24 10:52:06.506606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x184400 00:30:04.823 [2024-07-24 10:52:06.506613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.506621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184400 00:30:04.824 [2024-07-24 10:52:06.506627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.506637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184400 00:30:04.824 [2024-07-24 10:52:06.506644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.506652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184400 00:30:04.824 [2024-07-24 10:52:06.506658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56377 cdw0:d9cad000 sqhd:560c p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 
10:52:06.508527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.824 [2024-07-24 10:52:06.508560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.824 [2024-07-24 10:52:06.508567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:30:04.824 [2024-07-24 10:52:06.508574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.508611] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:30:04.824 [2024-07-24 10:52:06.508621] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:30:04.824 [2024-07-24 10:52:06.508629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:04.824 [2024-07-24 10:52:06.508658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.824 [2024-07-24 10:52:06.508667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56377 cdw0:0 sqhd:00a6 p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.508675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.824 [2024-07-24 10:52:06.508681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56377 cdw0:0 sqhd:00a6 p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.508689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.824 [2024-07-24 10:52:06.508696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56377 cdw0:0 sqhd:00a6 p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.508703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.824 [2024-07-24 10:52:06.508710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56377 cdw0:0 sqhd:00a6 p:1 m:0 dnr:0 00:30:04.824 [2024-07-24 10:52:06.525441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:04.824 [2024-07-24 10:52:06.525462] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:30:04.824 [2024-07-24 10:52:06.525470] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:04.824 [2024-07-24 10:52:06.528266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:04.824 [2024-07-24 10:52:06.574686] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
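The long runs of "ABORTED - SQ DELETION" completions above are queued I/O being drained while bdev_nvme tears down the qpair during failover: the qpair is disconnected and freed, the trid fails over from 192.168.100.8:4422 to 192.168.100.8:4420, and the controller is reset. Below is a minimal, illustrative sketch of the RPC sequence that drives one such hop. It is not the failover.sh implementation itself; it only reuses the rpc.py calls, addresses, NQN, and socket path that appear elsewhere in this log, and it assumes the nvmf target and the bdevperf RPC socket from this run are already up.

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # Advertise two RDMA portals on the subsystem: a primary and a failover target.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
    # Attach the initiator-side controller to both portals under the same bdev name.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Detaching the active path forces bdev_nvme to fail over to the next registered trid,
    # which produces the "Start failover ... Resetting controller successful" sequence seen above.
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1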
00:30:04.824 00:30:04.824 Latency(us) 00:30:04.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.824 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:04.824 Verification LBA range: start 0x0 length 0x4000 00:30:04.824 NVMe0n1 : 15.01 14132.20 55.20 312.20 0.00 8839.08 335.48 1046578.71 00:30:04.824 =================================================================================================================== 00:30:04.824 Total : 14132.20 55.20 312.20 0.00 8839.08 335.48 1046578.71 00:30:04.824 Received shutdown signal, test time was about 15.000000 seconds 00:30:04.824 00:30:04.824 Latency(us) 00:30:04.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.824 =================================================================================================================== 00:30:04.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2378407 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2378407 /var/tmp/bdevperf.sock 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2378407 ']' 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
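The gate at failover.sh@65-67 above only checks that the first bdevperf pass logged exactly three successful controller resets, one per failover hop (4420 -> 4421 -> 4422 -> 4420). A condensed sketch of that gate and of the second bdevperf launch follows; it assumes the first pass's output is the try.txt file referenced later in this log, which may differ from the script's actual variable names.

    # Require one 'Resetting controller successful' line per planned failover (3 in this test).
    count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, saw $count" >&2; exit 1; }
    # Second pass: start bdevperf in RPC-server mode (-z) so paths can be added and removed live.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!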
00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:04.824 10:52:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:04.824 10:52:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:04.824 10:52:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:04.824 10:52:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:05.082 [2024-07-24 10:52:12.268061] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:05.082 10:52:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:05.082 [2024-07-24 10:52:12.436652] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:30:05.082 10:52:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.341 NVMe0n1 00:30:05.341 10:52:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.599 00:30:05.599 10:52:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.857 00:30:05.857 10:52:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:05.857 10:52:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:06.114 10:52:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:06.114 10:52:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:09.397 10:52:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:09.397 10:52:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:09.397 10:52:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:09.397 10:52:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2379107 00:30:09.397 10:52:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2379107 00:30:10.773 0 00:30:10.773 10:52:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:10.773 [2024-07-24 10:52:11.931028] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:30:10.773 [2024-07-24 10:52:11.931076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378407 ] 00:30:10.773 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.773 [2024-07-24 10:52:11.986616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.773 [2024-07-24 10:52:12.023904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.773 [2024-07-24 10:52:13.537984] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:10.773 [2024-07-24 10:52:13.538590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.773 [2024-07-24 10:52:13.538620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.773 [2024-07-24 10:52:13.564281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:10.773 [2024-07-24 10:52:13.578880] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:10.773 Running I/O for 1 seconds... 00:30:10.773 00:30:10.773 Latency(us) 00:30:10.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.773 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:10.773 Verification LBA range: start 0x0 length 0x4000 00:30:10.773 NVMe0n1 : 1.00 17873.44 69.82 0.00 0.00 7116.04 1092.27 13356.86 00:30:10.773 =================================================================================================================== 00:30:10.773 Total : 17873.44 69.82 0.00 0.00 7116.04 1092.27 13356.86 00:30:10.773 10:52:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:10.773 10:52:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:10.773 10:52:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:10.773 10:52:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:10.773 10:52:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:11.031 10:52:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:11.289 10:52:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:14.574 10:52:21 
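The MiB/s column in the one-second verify summary above follows directly from the reported IOPS and the 4096-byte IO size passed to bdevperf; a quick arithmetic cross-check:

    # 17873.44 IOPS x 4096 bytes per IO, converted to MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 17873.44 * 4096 / (1024 * 1024) }'   # prints 69.82, matching the summary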
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2378407 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2378407 ']' 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2378407 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2378407 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2378407' 00:30:14.574 killing process with pid 2378407 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2378407 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2378407 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:14.574 10:52:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:14.833 rmmod nvme_rdma 00:30:14.833 rmmod nvme_fabrics 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2375434 ']' 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2375434 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2375434 ']' 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2375434 00:30:14.833 10:52:22 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2375434 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2375434' 00:30:14.833 killing process with pid 2375434 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2375434 00:30:14.833 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2375434 00:30:15.092 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:15.092 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:15.092 00:30:15.092 real 0m33.523s 00:30:15.092 user 1m55.730s 00:30:15.092 sys 0m5.591s 00:30:15.092 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.092 10:52:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:15.092 ************************************ 00:30:15.092 END TEST nvmf_failover 00:30:15.092 ************************************ 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.351 ************************************ 00:30:15.351 START TEST nvmf_host_discovery 00:30:15.351 ************************************ 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:15.351 * Looking for test storage... 
00:30:15.351 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:30:15.351 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:30:15.351 00:30:15.351 real 0m0.122s 00:30:15.351 user 0m0.061s 00:30:15.351 sys 0m0.070s 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.351 ************************************ 00:30:15.351 END TEST nvmf_host_discovery 00:30:15.351 ************************************ 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.351 ************************************ 00:30:15.351 START TEST nvmf_host_multipath_status 00:30:15.351 ************************************ 00:30:15.351 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:30:15.611 * Looking for test storage... 00:30:15.611 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.611 10:52:22 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:15.611 10:52:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.881 10:52:27 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:30:20.881 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:30:20.881 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:20.881 
10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:30:20.881 Found net devices under 0000:da:00.0: mlx_0_0 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:30:20.881 Found net devices under 0000:da:00.1: mlx_0_1 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:20.881 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_0 00:30:20.882 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:20.882 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:30:20.882 altname enp218s0f0np0 00:30:20.882 altname ens818f0np0 00:30:20.882 inet 192.168.100.8/24 scope global mlx_0_0 00:30:20.882 valid_lft forever preferred_lft forever 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:20.882 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:20.882 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:30:20.882 altname enp218s0f1np1 00:30:20.882 altname ens818f1np1 00:30:20.882 inet 192.168.100.9/24 scope global mlx_0_1 00:30:20.882 valid_lft forever preferred_lft forever 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:20.882 192.168.100.9' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:20.882 192.168.100.9' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:20.882 192.168.100.9' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:20.882 
10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2383132 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2383132 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2383132 ']' 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:20.882 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:20.883 [2024-07-24 10:52:27.726307] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:30:20.883 [2024-07-24 10:52:27.726350] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.883 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.883 [2024-07-24 10:52:27.779398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:20.883 [2024-07-24 10:52:27.820113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.883 [2024-07-24 10:52:27.820151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.883 [2024-07-24 10:52:27.820162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.883 [2024-07-24 10:52:27.820168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.883 [2024-07-24 10:52:27.820173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:20.883 [2024-07-24 10:52:27.820220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.883 [2024-07-24 10:52:27.820223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2383132 00:30:20.883 10:52:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:20.883 [2024-07-24 10:52:28.111574] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7ede60/0x7f2310) succeed. 00:30:20.883 [2024-07-24 10:52:28.120530] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7ef310/0x8339a0) succeed. 00:30:20.883 10:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:21.141 Malloc0 00:30:21.141 10:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:21.141 10:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:21.399 10:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:21.658 [2024-07-24 10:52:28.861559] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:21.658 10:52:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:21.658 [2024-07-24 10:52:29.037828] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2383379 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2383379 /var/tmp/bdevperf.sock 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2383379 ']' 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:21.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:21.658 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:21.917 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:21.917 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:21.917 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:22.176 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:22.435 Nvme0n1 00:30:22.435 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:22.694 Nvme0n1 00:30:22.694 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:22.694 10:52:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:24.596 10:52:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:24.596 10:52:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:30:24.854 10:52:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:24.854 10:52:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.229 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:26.487 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.487 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:26.487 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.487 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:26.746 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.746 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:26.746 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.746 10:52:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:26.746 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.746 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:30:26.746 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.746 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:27.005 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.005 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:27.005 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:27.263 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:27.263 10:52:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.668 10:52:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:28.668 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.668 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:28.668 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.668 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:28.926 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.926 10:52:36 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:28.926 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:28.926 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.185 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:29.444 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.444 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:29.444 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:29.702 10:52:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:29.702 10:52:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:31.078 10:52:38 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.078 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.335 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.593 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.593 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:31.594 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.594 10:52:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.852 10:52:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.852 10:52:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:30:31.852 10:52:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:32.111 10:52:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:32.111 10:52:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:33.048 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:33.048 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:33.048 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.048 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.306 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.306 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:33.306 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.306 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.565 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:33.565 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.565 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.565 10:52:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:33.824 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.824 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:33.824 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.824 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:33.824 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.824 10:52:41 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:33.824 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.824 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.083 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.083 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:34.083 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.083 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.341 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:34.341 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:34.341 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:34.341 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:34.600 10:52:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:35.535 10:52:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:35.535 10:52:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:35.535 10:52:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.535 10:52:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:35.794 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:35.794 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:35.794 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.794 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.054 10:52:43 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.054 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.054 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.054 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:36.054 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.054 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:36.054 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.054 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:36.313 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.313 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:36.313 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.313 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:36.572 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.572 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:36.572 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.572 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:36.572 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.572 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:36.572 10:52:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:36.831 10:52:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:36.831 10:52:44 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.206 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:38.465 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.465 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:38.465 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.465 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:38.724 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.724 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:38.724 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.724 10:52:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:38.724 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:30:38.724 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:38.724 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.724 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:38.982 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.982 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:39.241 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:39.241 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:30:39.241 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:39.499 10:52:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:40.435 10:52:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:40.435 10:52:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:40.435 10:52:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.435 10:52:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:40.694 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.694 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:40.694 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.694 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.952 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:41.210 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.210 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:41.210 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.210 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:41.468 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.468 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:41.468 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.468 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:41.468 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.468 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:41.468 10:52:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:41.726 10:52:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:41.984 10:52:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.061 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.320 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:43.579 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.579 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:43.579 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.579 10:52:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:43.837 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.837 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:43.837 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.837 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:43.837 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.837 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:43.837 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:44.096 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:44.354 10:52:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:45.290 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:45.290 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:45.290 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:45.290 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.549 10:52:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:45.807 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.808 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:45.808 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:30:45.808 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.065 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:46.323 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.323 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:46.323 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:46.582 10:52:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:46.582 10:52:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.971 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:48.230 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.230 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:48.230 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.230 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.489 10:52:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2383379 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2383379 ']' 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2383379 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # uname 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2383379 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2383379' 00:30:48.748 killing process with pid 2383379 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2383379 00:30:48.748 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2383379 00:30:48.748 Connection closed with partial response: 00:30:48.748 00:30:48.748 00:30:49.011 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2383379 00:30:49.011 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:49.011 [2024-07-24 10:52:29.093148] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:30:49.011 [2024-07-24 10:52:29.093196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383379 ] 00:30:49.011 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.011 [2024-07-24 10:52:29.141947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.011 [2024-07-24 10:52:29.182072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.011 Running I/O for 90 seconds... 
00:30:49.011 [2024-07-24 10:52:41.701563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701783] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701937] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.701989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.701998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3576 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:49.012 [2024-07-24 10:52:41.702164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183400 00:30:49.012 [2024-07-24 10:52:41.702171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 
len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 
10:52:41.702358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:49.013 [2024-07-24 10:52:41.702703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183400 00:30:49.013 [2024-07-24 10:52:41.702710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:49.014 
[2024-07-24 10:52:41.702960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.702992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.702999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.703014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.703030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.703046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.703063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.703079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.703095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183400 00:30:49.014 [2024-07-24 10:52:41.703111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 
p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.014 [2024-07-24 10:52:41.703950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:49.014 [2024-07-24 10:52:41.703967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.703976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.703993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:41.704618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:41.704627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:63688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:53.994562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.015 [2024-07-24 10:52:53.994594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183400 00:30:49.015 
[2024-07-24 10:52:53.994679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:49.015 [2024-07-24 10:52:53.994688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183400 00:30:49.015 [2024-07-24 10:52:53.994694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.994703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.994711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63968 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183400 00:30:49.016 [2024-07-24 10:52:53.995450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:49.016 [2024-07-24 10:52:53.995474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.016 [2024-07-24 10:52:53.995481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 
sqhd:0038 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64008 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 
cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.995875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.995983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.995991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.996001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.996007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.996017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.996023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.997625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.017 [2024-07-24 10:52:53.997643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.997657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.997664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.997674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.997681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.997701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183400 00:30:49.017 [2024-07-24 10:52:53.997709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:49.017 [2024-07-24 10:52:53.998021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64256 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:49.018 
[2024-07-24 10:52:53.998215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:49.018 [2024-07-24 10:52:53.998367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:30:49.018 [2024-07-24 10:52:53.998702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.018 [2024-07-24 10:52:53.998713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183400 00:30:49.018 [2024-07-24 10:52:53.998778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:49.018 [2024-07-24 10:52:53.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.998793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.998809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:63992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.998875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.998905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.998973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.998989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.998998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.999021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.999037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.999053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:53.999083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:49.019 
[2024-07-24 10:52:53.999156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:53.999189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:53.999196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:54.000798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:54.000818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:54.000835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:54.000850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:54.000867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:54.000883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:49.019 [2024-07-24 10:52:54.000898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.019 [2024-07-24 10:52:54.000913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:54.000929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:54.000945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:49.019 [2024-07-24 10:52:54.000954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183400 00:30:49.019 [2024-07-24 10:52:54.000961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.000970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.000977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.000990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.000996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 
cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:49.020 [2024-07-24 10:52:54.001611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.001849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.001873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.001880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.009243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.009253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.009264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.009271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.009281] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.009288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.009297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.009304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.009315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.020 [2024-07-24 10:52:54.009323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.009338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183400 00:30:49.020 [2024-07-24 10:52:54.009346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:49.020 [2024-07-24 10:52:54.009355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183400 00:30:49.021 [2024-07-24 10:52:54.009362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:49.021 [2024-07-24 10:52:54.009372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183400 00:30:49.021 [2024-07-24 10:52:54.009379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:49.021 [2024-07-24 10:52:54.009389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183400 00:30:49.021 [2024-07-24 10:52:54.009396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:49.021 [2024-07-24 10:52:54.009404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.021 [2024-07-24 10:52:54.009411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:49.021 [2024-07-24 10:52:54.009421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183400 00:30:49.021 [2024-07-24 10:52:54.009428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:49.021 [2024-07-24 10:52:54.009437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 
key:0x183400 00:30:49.021 [2024-07-24 10:52:54.009444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 
cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.009696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.009739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.009757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.011414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 
[2024-07-24 10:52:54.011432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.011451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.011459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.011470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.011477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.011486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.011498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.011508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183400 00:30:49.022 [2024-07-24 10:52:54.011515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.011526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.022 [2024-07-24 10:52:54.011533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:49.022 [2024-07-24 10:52:54.011542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.011565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.011580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.011857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.011906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.011922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.011969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.011984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.011993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.012000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.012015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.012030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.012062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.012078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.012093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.012108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.012124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183400 00:30:49.023 [2024-07-24 10:52:54.012140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.012156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:49.023 [2024-07-24 10:52:54.012165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.023 [2024-07-24 10:52:54.012171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:49.023 Received shutdown signal, test time was about 25.999486 seconds 00:30:49.023 00:30:49.023 Latency(us) 00:30:49.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.023 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:49.023 Verification LBA range: start 0x0 length 0x4000 00:30:49.023 Nvme0n1 : 26.00 15620.42 61.02 0.00 0.00 8174.68 901.12 3019898.88 00:30:49.023 =================================================================================================================== 00:30:49.023 Total : 15620.42 61.02 0.00 0.00 8174.68 901.12 3019898.88 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:49.023 10:52:56 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.023 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:49.023 rmmod nvme_rdma 00:30:49.283 rmmod nvme_fabrics 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2383132 ']' 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2383132 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2383132 ']' 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2383132 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2383132 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2383132' 00:30:49.283 killing process with pid 2383132 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2383132 00:30:49.283 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2383132 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:49.543 00:30:49.543 real 0m34.004s 00:30:49.543 user 1m40.537s 00:30:49.543 sys 0m6.890s 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:49.543 ************************************ 00:30:49.543 END TEST nvmf_host_multipath_status 00:30:49.543 ************************************ 00:30:49.543 
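For reference, the long run of READ/WRITE completions above that end with ASYMMETRIC ACCESS INACCESSIBLE (03/02) is expected for this test: multipath_status.sh toggles the ANA state of one listener while verify I/O is in flight, so commands on that path complete with the ANA path-related status until the state is restored. The teardown traced just above then reduces to roughly the following sketch (calls and values taken from the trace itself; paths abbreviated, pid from this run):

  # delete the target-side subsystem used by the test
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unwind the host side: drop the kernel initiator modules
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt application (pid 2383132 in this run)
  kill 2383132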
10:52:56 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.543 ************************************ 00:30:49.543 START TEST nvmf_discovery_remove_ifc 00:30:49.543 ************************************ 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:30:49.543 * Looking for test storage... 00:30:49.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:30:49.543 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:30:49.543 00:30:49.543 real 0m0.103s 00:30:49.543 user 0m0.047s 00:30:49.543 sys 0m0.063s 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.543 ************************************ 00:30:49.543 END TEST nvmf_discovery_remove_ifc 00:30:49.543 ************************************ 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.543 ************************************ 00:30:49.543 START TEST nvmf_identify_kernel_target 00:30:49.543 ************************************ 00:30:49.543 10:52:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:30:49.803 * Looking for test storage... 
00:30:49.803 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:30:49.803 10:52:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 
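The e810/x722/mlx arrays being set up here are how nvmf/common.sh decides which NICs the RDMA tests may use: it walks the PCI bus, buckets functions by vendor:device ID (Intel E810/X722 parts versus Mellanox mlx5 parts such as the 0x15b3:0x1015 adapters reported below), and then collects the net devices behind each function. A minimal stand-alone sketch of the same idea follows; this is not the harness code itself, and the vendor filter is deliberately simplified to Mellanox only:

  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
      [[ $vendor == 0x15b3 ]] || continue           # Mellanox only, for brevity
      echo "Found ${pci##*/} ($vendor - $device)"   # e.g. Found 0000:da:00.0 (0x15b3 - 0x1015)
      ls "$pci/net" 2>/dev/null                     # netdevs behind it, e.g. mlx_0_0 / mlx_0_1
  done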
00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:55.076 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:30:55.077 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:30:55.077 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:30:55.077 Found net devices under 0000:da:00.0: mlx_0_0 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:30:55.077 Found net devices under 0000:da:00.1: mlx_0_1 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:55.077 10:53:02 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:55.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:55.077 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:30:55.077 altname enp218s0f0np0 00:30:55.077 altname ens818f0np0 00:30:55.077 inet 192.168.100.8/24 scope global mlx_0_0 00:30:55.077 valid_lft forever preferred_lft forever 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:55.077 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:55.077 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:30:55.077 altname enp218s0f1np1 00:30:55.077 altname ens818f1np1 00:30:55.077 inet 192.168.100.9/24 scope global mlx_0_1 00:30:55.077 valid_lft forever preferred_lft forever 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
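With both mlx5 ports detected, the harness reads the test addresses straight off the interfaces; the ip/awk/cut pipeline traced here amounts to the two commands below, which are also a quick way to re-check the addressing on the build host (interface names as renamed in this run):

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8  -> NVMF_FIRST_TARGET_IP
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9  -> NVMF_SECOND_TARGET_IP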
00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:55.077 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:55.078 192.168.100.9' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:55.078 192.168.100.9' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:55.078 192.168.100.9' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:30:55.078 
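The configure_kernel_target call traced next builds an in-kernel nvmet target over configfs rather than starting an SPDK target: it creates a subsystem with one namespace backed by the local /dev/nvme0n1, exposes it on an RDMA port at 192.168.100.8:4420, and links the two. A rough sketch of that sequence is shown below using the standard nvmet configfs attribute names; the xtrace only shows the echo values, not the files they are redirected to, so treat the attribute paths as an assumption (the serial/model string written in the trace is omitted here):

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  echo 1              > "$sub/attr_allow_any_host"        # assumed attribute path
  echo /dev/nvme0n1   > "$sub/namespaces/1/device_path"
  echo 1              > "$sub/namespaces/1/enable"
  echo 192.168.100.8  > "$port/addr_traddr"
  echo rdma           > "$port/addr_trtype"
  echo 4420           > "$port/addr_trsvcid"
  echo ipv4           > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

The host-side check that follows (nvme discover against 192.168.100.8:4420 over rdma, using the generated hostnqn/hostid) is what produces the two-record Discovery Log at the end of this trace.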
10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:55.078 10:53:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:30:57.615 Waiting for block devices as requested 00:30:57.875 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:30:57.875 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:57.875 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:58.134 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:58.134 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:58.134 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:58.134 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:58.393 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:58.393 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:58.393 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:58.393 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:58.651 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:58.651 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:58.651 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:58.909 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:58.909 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:58.909 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # 
block_in_use nvme0n1 00:30:59.168 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:59.169 No valid GPT data, bailing 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:59.169 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:30:59.428 00:30:59.428 Discovery Log Number of Records 2, Generation counter 2 00:30:59.428 =====Discovery Log Entry 0====== 00:30:59.428 trtype: rdma 00:30:59.428 adrfam: ipv4 00:30:59.428 subtype: current discovery subsystem 00:30:59.428 treq: not specified, sq flow control disable supported 00:30:59.428 portid: 1 00:30:59.428 trsvcid: 4420 00:30:59.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:59.428 traddr: 192.168.100.8 00:30:59.428 eflags: none 00:30:59.428 rdma_prtype: not specified 00:30:59.428 rdma_qptype: connected 00:30:59.428 rdma_cms: rdma-cm 00:30:59.428 rdma_pkey: 0x0000 00:30:59.428 =====Discovery Log Entry 1====== 00:30:59.428 trtype: rdma 00:30:59.428 adrfam: ipv4 00:30:59.428 subtype: nvme subsystem 00:30:59.429 
treq: not specified, sq flow control disable supported 00:30:59.429 portid: 1 00:30:59.429 trsvcid: 4420 00:30:59.429 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:59.429 traddr: 192.168.100.8 00:30:59.429 eflags: none 00:30:59.429 rdma_prtype: not specified 00:30:59.429 rdma_qptype: connected 00:30:59.429 rdma_cms: rdma-cm 00:30:59.429 rdma_pkey: 0x0000 00:30:59.429 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:30:59.429 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:59.429 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.429 ===================================================== 00:30:59.429 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:59.429 ===================================================== 00:30:59.429 Controller Capabilities/Features 00:30:59.429 ================================ 00:30:59.429 Vendor ID: 0000 00:30:59.429 Subsystem Vendor ID: 0000 00:30:59.429 Serial Number: 3615fc1c219d0d581e4e 00:30:59.429 Model Number: Linux 00:30:59.429 Firmware Version: 6.7.0-68 00:30:59.429 Recommended Arb Burst: 0 00:30:59.429 IEEE OUI Identifier: 00 00 00 00:30:59.429 Multi-path I/O 00:30:59.429 May have multiple subsystem ports: No 00:30:59.429 May have multiple controllers: No 00:30:59.429 Associated with SR-IOV VF: No 00:30:59.429 Max Data Transfer Size: Unlimited 00:30:59.429 Max Number of Namespaces: 0 00:30:59.429 Max Number of I/O Queues: 1024 00:30:59.429 NVMe Specification Version (VS): 1.3 00:30:59.429 NVMe Specification Version (Identify): 1.3 00:30:59.429 Maximum Queue Entries: 128 00:30:59.429 Contiguous Queues Required: No 00:30:59.429 Arbitration Mechanisms Supported 00:30:59.429 Weighted Round Robin: Not Supported 00:30:59.429 Vendor Specific: Not Supported 00:30:59.429 Reset Timeout: 7500 ms 00:30:59.429 Doorbell Stride: 4 bytes 00:30:59.429 NVM Subsystem Reset: Not Supported 00:30:59.429 Command Sets Supported 00:30:59.429 NVM Command Set: Supported 00:30:59.429 Boot Partition: Not Supported 00:30:59.429 Memory Page Size Minimum: 4096 bytes 00:30:59.429 Memory Page Size Maximum: 4096 bytes 00:30:59.429 Persistent Memory Region: Not Supported 00:30:59.429 Optional Asynchronous Events Supported 00:30:59.429 Namespace Attribute Notices: Not Supported 00:30:59.429 Firmware Activation Notices: Not Supported 00:30:59.429 ANA Change Notices: Not Supported 00:30:59.429 PLE Aggregate Log Change Notices: Not Supported 00:30:59.429 LBA Status Info Alert Notices: Not Supported 00:30:59.429 EGE Aggregate Log Change Notices: Not Supported 00:30:59.429 Normal NVM Subsystem Shutdown event: Not Supported 00:30:59.429 Zone Descriptor Change Notices: Not Supported 00:30:59.429 Discovery Log Change Notices: Supported 00:30:59.429 Controller Attributes 00:30:59.429 128-bit Host Identifier: Not Supported 00:30:59.429 Non-Operational Permissive Mode: Not Supported 00:30:59.429 NVM Sets: Not Supported 00:30:59.429 Read Recovery Levels: Not Supported 00:30:59.429 Endurance Groups: Not Supported 00:30:59.429 Predictable Latency Mode: Not Supported 00:30:59.429 Traffic Based Keep ALive: Not Supported 00:30:59.429 Namespace Granularity: Not Supported 00:30:59.429 SQ Associations: Not Supported 00:30:59.429 UUID List: Not Supported 00:30:59.429 Multi-Domain Subsystem: Not Supported 00:30:59.429 Fixed Capacity Management: Not Supported 00:30:59.429 Variable 
Capacity Management: Not Supported 00:30:59.429 Delete Endurance Group: Not Supported 00:30:59.429 Delete NVM Set: Not Supported 00:30:59.429 Extended LBA Formats Supported: Not Supported 00:30:59.429 Flexible Data Placement Supported: Not Supported 00:30:59.429 00:30:59.429 Controller Memory Buffer Support 00:30:59.429 ================================ 00:30:59.429 Supported: No 00:30:59.429 00:30:59.429 Persistent Memory Region Support 00:30:59.429 ================================ 00:30:59.429 Supported: No 00:30:59.429 00:30:59.429 Admin Command Set Attributes 00:30:59.429 ============================ 00:30:59.429 Security Send/Receive: Not Supported 00:30:59.429 Format NVM: Not Supported 00:30:59.429 Firmware Activate/Download: Not Supported 00:30:59.429 Namespace Management: Not Supported 00:30:59.429 Device Self-Test: Not Supported 00:30:59.429 Directives: Not Supported 00:30:59.429 NVMe-MI: Not Supported 00:30:59.429 Virtualization Management: Not Supported 00:30:59.429 Doorbell Buffer Config: Not Supported 00:30:59.429 Get LBA Status Capability: Not Supported 00:30:59.429 Command & Feature Lockdown Capability: Not Supported 00:30:59.429 Abort Command Limit: 1 00:30:59.429 Async Event Request Limit: 1 00:30:59.429 Number of Firmware Slots: N/A 00:30:59.429 Firmware Slot 1 Read-Only: N/A 00:30:59.429 Firmware Activation Without Reset: N/A 00:30:59.429 Multiple Update Detection Support: N/A 00:30:59.429 Firmware Update Granularity: No Information Provided 00:30:59.429 Per-Namespace SMART Log: No 00:30:59.429 Asymmetric Namespace Access Log Page: Not Supported 00:30:59.429 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:59.429 Command Effects Log Page: Not Supported 00:30:59.429 Get Log Page Extended Data: Supported 00:30:59.429 Telemetry Log Pages: Not Supported 00:30:59.429 Persistent Event Log Pages: Not Supported 00:30:59.429 Supported Log Pages Log Page: May Support 00:30:59.429 Commands Supported & Effects Log Page: Not Supported 00:30:59.429 Feature Identifiers & Effects Log Page:May Support 00:30:59.429 NVMe-MI Commands & Effects Log Page: May Support 00:30:59.429 Data Area 4 for Telemetry Log: Not Supported 00:30:59.429 Error Log Page Entries Supported: 1 00:30:59.429 Keep Alive: Not Supported 00:30:59.429 00:30:59.429 NVM Command Set Attributes 00:30:59.429 ========================== 00:30:59.429 Submission Queue Entry Size 00:30:59.429 Max: 1 00:30:59.429 Min: 1 00:30:59.429 Completion Queue Entry Size 00:30:59.429 Max: 1 00:30:59.429 Min: 1 00:30:59.429 Number of Namespaces: 0 00:30:59.429 Compare Command: Not Supported 00:30:59.429 Write Uncorrectable Command: Not Supported 00:30:59.429 Dataset Management Command: Not Supported 00:30:59.429 Write Zeroes Command: Not Supported 00:30:59.429 Set Features Save Field: Not Supported 00:30:59.429 Reservations: Not Supported 00:30:59.429 Timestamp: Not Supported 00:30:59.429 Copy: Not Supported 00:30:59.429 Volatile Write Cache: Not Present 00:30:59.429 Atomic Write Unit (Normal): 1 00:30:59.429 Atomic Write Unit (PFail): 1 00:30:59.429 Atomic Compare & Write Unit: 1 00:30:59.429 Fused Compare & Write: Not Supported 00:30:59.429 Scatter-Gather List 00:30:59.429 SGL Command Set: Supported 00:30:59.429 SGL Keyed: Supported 00:30:59.429 SGL Bit Bucket Descriptor: Not Supported 00:30:59.429 SGL Metadata Pointer: Not Supported 00:30:59.429 Oversized SGL: Not Supported 00:30:59.429 SGL Metadata Address: Not Supported 00:30:59.429 SGL Offset: Supported 00:30:59.429 Transport SGL Data Block: Not Supported 00:30:59.429 Replay 
Protected Memory Block: Not Supported 00:30:59.429 00:30:59.429 Firmware Slot Information 00:30:59.429 ========================= 00:30:59.429 Active slot: 0 00:30:59.429 00:30:59.429 00:30:59.429 Error Log 00:30:59.429 ========= 00:30:59.429 00:30:59.429 Active Namespaces 00:30:59.429 ================= 00:30:59.429 Discovery Log Page 00:30:59.429 ================== 00:30:59.429 Generation Counter: 2 00:30:59.429 Number of Records: 2 00:30:59.429 Record Format: 0 00:30:59.429 00:30:59.429 Discovery Log Entry 0 00:30:59.429 ---------------------- 00:30:59.429 Transport Type: 1 (RDMA) 00:30:59.429 Address Family: 1 (IPv4) 00:30:59.429 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:59.429 Entry Flags: 00:30:59.429 Duplicate Returned Information: 0 00:30:59.429 Explicit Persistent Connection Support for Discovery: 0 00:30:59.429 Transport Requirements: 00:30:59.429 Secure Channel: Not Specified 00:30:59.429 Port ID: 1 (0x0001) 00:30:59.429 Controller ID: 65535 (0xffff) 00:30:59.429 Admin Max SQ Size: 32 00:30:59.429 Transport Service Identifier: 4420 00:30:59.429 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:59.429 Transport Address: 192.168.100.8 00:30:59.429 Transport Specific Address Subtype - RDMA 00:30:59.429 RDMA QP Service Type: 1 (Reliable Connected) 00:30:59.429 RDMA Provider Type: 1 (No provider specified) 00:30:59.429 RDMA CM Service: 1 (RDMA_CM) 00:30:59.429 Discovery Log Entry 1 00:30:59.429 ---------------------- 00:30:59.429 Transport Type: 1 (RDMA) 00:30:59.429 Address Family: 1 (IPv4) 00:30:59.429 Subsystem Type: 2 (NVM Subsystem) 00:30:59.429 Entry Flags: 00:30:59.429 Duplicate Returned Information: 0 00:30:59.430 Explicit Persistent Connection Support for Discovery: 0 00:30:59.430 Transport Requirements: 00:30:59.430 Secure Channel: Not Specified 00:30:59.430 Port ID: 1 (0x0001) 00:30:59.430 Controller ID: 65535 (0xffff) 00:30:59.430 Admin Max SQ Size: 32 00:30:59.430 Transport Service Identifier: 4420 00:30:59.430 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:59.430 Transport Address: 192.168.100.8 00:30:59.430 Transport Specific Address Subtype - RDMA 00:30:59.430 RDMA QP Service Type: 1 (Reliable Connected) 00:30:59.430 RDMA Provider Type: 1 (No provider specified) 00:30:59.430 RDMA CM Service: 1 (RDMA_CM) 00:30:59.430 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:59.430 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.430 get_feature(0x01) failed 00:30:59.430 get_feature(0x02) failed 00:30:59.430 get_feature(0x04) failed 00:30:59.430 ===================================================== 00:30:59.430 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:30:59.430 ===================================================== 00:30:59.430 Controller Capabilities/Features 00:30:59.430 ================================ 00:30:59.430 Vendor ID: 0000 00:30:59.430 Subsystem Vendor ID: 0000 00:30:59.430 Serial Number: ad50234c88e8b8fab2f0 00:30:59.430 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:59.430 Firmware Version: 6.7.0-68 00:30:59.430 Recommended Arb Burst: 6 00:30:59.430 IEEE OUI Identifier: 00 00 00 00:30:59.430 Multi-path I/O 00:30:59.430 May have multiple subsystem ports: Yes 00:30:59.430 May have multiple controllers: Yes 00:30:59.430 Associated with 
SR-IOV VF: No 00:30:59.430 Max Data Transfer Size: 1048576 00:30:59.430 Max Number of Namespaces: 1024 00:30:59.430 Max Number of I/O Queues: 128 00:30:59.430 NVMe Specification Version (VS): 1.3 00:30:59.430 NVMe Specification Version (Identify): 1.3 00:30:59.430 Maximum Queue Entries: 128 00:30:59.430 Contiguous Queues Required: No 00:30:59.430 Arbitration Mechanisms Supported 00:30:59.430 Weighted Round Robin: Not Supported 00:30:59.430 Vendor Specific: Not Supported 00:30:59.430 Reset Timeout: 7500 ms 00:30:59.430 Doorbell Stride: 4 bytes 00:30:59.430 NVM Subsystem Reset: Not Supported 00:30:59.430 Command Sets Supported 00:30:59.430 NVM Command Set: Supported 00:30:59.430 Boot Partition: Not Supported 00:30:59.430 Memory Page Size Minimum: 4096 bytes 00:30:59.430 Memory Page Size Maximum: 4096 bytes 00:30:59.430 Persistent Memory Region: Not Supported 00:30:59.430 Optional Asynchronous Events Supported 00:30:59.430 Namespace Attribute Notices: Supported 00:30:59.430 Firmware Activation Notices: Not Supported 00:30:59.430 ANA Change Notices: Supported 00:30:59.430 PLE Aggregate Log Change Notices: Not Supported 00:30:59.430 LBA Status Info Alert Notices: Not Supported 00:30:59.430 EGE Aggregate Log Change Notices: Not Supported 00:30:59.430 Normal NVM Subsystem Shutdown event: Not Supported 00:30:59.430 Zone Descriptor Change Notices: Not Supported 00:30:59.430 Discovery Log Change Notices: Not Supported 00:30:59.430 Controller Attributes 00:30:59.430 128-bit Host Identifier: Supported 00:30:59.430 Non-Operational Permissive Mode: Not Supported 00:30:59.430 NVM Sets: Not Supported 00:30:59.430 Read Recovery Levels: Not Supported 00:30:59.430 Endurance Groups: Not Supported 00:30:59.430 Predictable Latency Mode: Not Supported 00:30:59.430 Traffic Based Keep ALive: Supported 00:30:59.430 Namespace Granularity: Not Supported 00:30:59.430 SQ Associations: Not Supported 00:30:59.430 UUID List: Not Supported 00:30:59.430 Multi-Domain Subsystem: Not Supported 00:30:59.430 Fixed Capacity Management: Not Supported 00:30:59.430 Variable Capacity Management: Not Supported 00:30:59.430 Delete Endurance Group: Not Supported 00:30:59.430 Delete NVM Set: Not Supported 00:30:59.430 Extended LBA Formats Supported: Not Supported 00:30:59.430 Flexible Data Placement Supported: Not Supported 00:30:59.430 00:30:59.430 Controller Memory Buffer Support 00:30:59.430 ================================ 00:30:59.430 Supported: No 00:30:59.430 00:30:59.430 Persistent Memory Region Support 00:30:59.430 ================================ 00:30:59.430 Supported: No 00:30:59.430 00:30:59.430 Admin Command Set Attributes 00:30:59.430 ============================ 00:30:59.430 Security Send/Receive: Not Supported 00:30:59.430 Format NVM: Not Supported 00:30:59.430 Firmware Activate/Download: Not Supported 00:30:59.430 Namespace Management: Not Supported 00:30:59.430 Device Self-Test: Not Supported 00:30:59.430 Directives: Not Supported 00:30:59.430 NVMe-MI: Not Supported 00:30:59.430 Virtualization Management: Not Supported 00:30:59.430 Doorbell Buffer Config: Not Supported 00:30:59.430 Get LBA Status Capability: Not Supported 00:30:59.430 Command & Feature Lockdown Capability: Not Supported 00:30:59.430 Abort Command Limit: 4 00:30:59.430 Async Event Request Limit: 4 00:30:59.430 Number of Firmware Slots: N/A 00:30:59.430 Firmware Slot 1 Read-Only: N/A 00:30:59.430 Firmware Activation Without Reset: N/A 00:30:59.430 Multiple Update Detection Support: N/A 00:30:59.430 Firmware Update Granularity: No Information Provided 
00:30:59.430 Per-Namespace SMART Log: Yes 00:30:59.430 Asymmetric Namespace Access Log Page: Supported 00:30:59.430 ANA Transition Time : 10 sec 00:30:59.430 00:30:59.430 Asymmetric Namespace Access Capabilities 00:30:59.430 ANA Optimized State : Supported 00:30:59.430 ANA Non-Optimized State : Supported 00:30:59.430 ANA Inaccessible State : Supported 00:30:59.430 ANA Persistent Loss State : Supported 00:30:59.430 ANA Change State : Supported 00:30:59.430 ANAGRPID is not changed : No 00:30:59.430 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:59.430 00:30:59.430 ANA Group Identifier Maximum : 128 00:30:59.430 Number of ANA Group Identifiers : 128 00:30:59.430 Max Number of Allowed Namespaces : 1024 00:30:59.430 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:59.430 Command Effects Log Page: Supported 00:30:59.430 Get Log Page Extended Data: Supported 00:30:59.430 Telemetry Log Pages: Not Supported 00:30:59.430 Persistent Event Log Pages: Not Supported 00:30:59.430 Supported Log Pages Log Page: May Support 00:30:59.430 Commands Supported & Effects Log Page: Not Supported 00:30:59.430 Feature Identifiers & Effects Log Page:May Support 00:30:59.430 NVMe-MI Commands & Effects Log Page: May Support 00:30:59.430 Data Area 4 for Telemetry Log: Not Supported 00:30:59.430 Error Log Page Entries Supported: 128 00:30:59.430 Keep Alive: Supported 00:30:59.430 Keep Alive Granularity: 1000 ms 00:30:59.430 00:30:59.430 NVM Command Set Attributes 00:30:59.430 ========================== 00:30:59.430 Submission Queue Entry Size 00:30:59.430 Max: 64 00:30:59.430 Min: 64 00:30:59.430 Completion Queue Entry Size 00:30:59.430 Max: 16 00:30:59.430 Min: 16 00:30:59.430 Number of Namespaces: 1024 00:30:59.430 Compare Command: Not Supported 00:30:59.430 Write Uncorrectable Command: Not Supported 00:30:59.430 Dataset Management Command: Supported 00:30:59.430 Write Zeroes Command: Supported 00:30:59.430 Set Features Save Field: Not Supported 00:30:59.430 Reservations: Not Supported 00:30:59.430 Timestamp: Not Supported 00:30:59.430 Copy: Not Supported 00:30:59.430 Volatile Write Cache: Present 00:30:59.430 Atomic Write Unit (Normal): 1 00:30:59.430 Atomic Write Unit (PFail): 1 00:30:59.430 Atomic Compare & Write Unit: 1 00:30:59.430 Fused Compare & Write: Not Supported 00:30:59.430 Scatter-Gather List 00:30:59.430 SGL Command Set: Supported 00:30:59.430 SGL Keyed: Supported 00:30:59.430 SGL Bit Bucket Descriptor: Not Supported 00:30:59.430 SGL Metadata Pointer: Not Supported 00:30:59.430 Oversized SGL: Not Supported 00:30:59.430 SGL Metadata Address: Not Supported 00:30:59.430 SGL Offset: Supported 00:30:59.430 Transport SGL Data Block: Not Supported 00:30:59.430 Replay Protected Memory Block: Not Supported 00:30:59.430 00:30:59.430 Firmware Slot Information 00:30:59.430 ========================= 00:30:59.430 Active slot: 0 00:30:59.430 00:30:59.430 Asymmetric Namespace Access 00:30:59.430 =========================== 00:30:59.430 Change Count : 0 00:30:59.430 Number of ANA Group Descriptors : 1 00:30:59.430 ANA Group Descriptor : 0 00:30:59.430 ANA Group ID : 1 00:30:59.430 Number of NSID Values : 1 00:30:59.430 Change Count : 0 00:30:59.430 ANA State : 1 00:30:59.430 Namespace Identifier : 1 00:30:59.430 00:30:59.430 Commands Supported and Effects 00:30:59.430 ============================== 00:30:59.430 Admin Commands 00:30:59.430 -------------- 00:30:59.430 Get Log Page (02h): Supported 00:30:59.430 Identify (06h): Supported 00:30:59.430 Abort (08h): Supported 00:30:59.431 Set Features (09h): Supported 
00:30:59.431 Get Features (0Ah): Supported 00:30:59.431 Asynchronous Event Request (0Ch): Supported 00:30:59.431 Keep Alive (18h): Supported 00:30:59.431 I/O Commands 00:30:59.431 ------------ 00:30:59.431 Flush (00h): Supported 00:30:59.431 Write (01h): Supported LBA-Change 00:30:59.431 Read (02h): Supported 00:30:59.431 Write Zeroes (08h): Supported LBA-Change 00:30:59.431 Dataset Management (09h): Supported 00:30:59.431 00:30:59.431 Error Log 00:30:59.431 ========= 00:30:59.431 Entry: 0 00:30:59.431 Error Count: 0x3 00:30:59.431 Submission Queue Id: 0x0 00:30:59.431 Command Id: 0x5 00:30:59.431 Phase Bit: 0 00:30:59.431 Status Code: 0x2 00:30:59.431 Status Code Type: 0x0 00:30:59.431 Do Not Retry: 1 00:30:59.690 Error Location: 0x28 00:30:59.690 LBA: 0x0 00:30:59.690 Namespace: 0x0 00:30:59.690 Vendor Log Page: 0x0 00:30:59.690 ----------- 00:30:59.690 Entry: 1 00:30:59.690 Error Count: 0x2 00:30:59.690 Submission Queue Id: 0x0 00:30:59.690 Command Id: 0x5 00:30:59.690 Phase Bit: 0 00:30:59.690 Status Code: 0x2 00:30:59.690 Status Code Type: 0x0 00:30:59.690 Do Not Retry: 1 00:30:59.690 Error Location: 0x28 00:30:59.690 LBA: 0x0 00:30:59.690 Namespace: 0x0 00:30:59.690 Vendor Log Page: 0x0 00:30:59.690 ----------- 00:30:59.690 Entry: 2 00:30:59.690 Error Count: 0x1 00:30:59.690 Submission Queue Id: 0x0 00:30:59.690 Command Id: 0x0 00:30:59.690 Phase Bit: 0 00:30:59.690 Status Code: 0x2 00:30:59.690 Status Code Type: 0x0 00:30:59.690 Do Not Retry: 1 00:30:59.690 Error Location: 0x28 00:30:59.690 LBA: 0x0 00:30:59.690 Namespace: 0x0 00:30:59.690 Vendor Log Page: 0x0 00:30:59.690 00:30:59.690 Number of Queues 00:30:59.690 ================ 00:30:59.690 Number of I/O Submission Queues: 128 00:30:59.690 Number of I/O Completion Queues: 128 00:30:59.690 00:30:59.690 ZNS Specific Controller Data 00:30:59.690 ============================ 00:30:59.690 Zone Append Size Limit: 0 00:30:59.690 00:30:59.690 00:30:59.690 Active Namespaces 00:30:59.690 ================= 00:30:59.690 get_feature(0x05) failed 00:30:59.690 Namespace ID:1 00:30:59.690 Command Set Identifier: NVM (00h) 00:30:59.690 Deallocate: Supported 00:30:59.690 Deallocated/Unwritten Error: Not Supported 00:30:59.690 Deallocated Read Value: Unknown 00:30:59.690 Deallocate in Write Zeroes: Not Supported 00:30:59.690 Deallocated Guard Field: 0xFFFF 00:30:59.690 Flush: Supported 00:30:59.690 Reservation: Not Supported 00:30:59.690 Namespace Sharing Capabilities: Multiple Controllers 00:30:59.690 Size (in LBAs): 3125627568 (1490GiB) 00:30:59.690 Capacity (in LBAs): 3125627568 (1490GiB) 00:30:59.690 Utilization (in LBAs): 3125627568 (1490GiB) 00:30:59.690 UUID: 5a613c1c-9c35-487d-850d-914ca8ab3296 00:30:59.690 Thin Provisioning: Not Supported 00:30:59.690 Per-NS Atomic Units: Yes 00:30:59.690 Atomic Boundary Size (Normal): 0 00:30:59.690 Atomic Boundary Size (PFail): 0 00:30:59.690 Atomic Boundary Offset: 0 00:30:59.690 NGUID/EUI64 Never Reused: No 00:30:59.690 ANA group ID: 1 00:30:59.690 Namespace Write Protected: No 00:30:59.690 Number of LBA Formats: 1 00:30:59.690 Current LBA Format: LBA Format #00 00:30:59.690 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:59.690 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:30:59.690 10:53:06 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:59.690 rmmod nvme_rdma 00:30:59.690 rmmod nvme_fabrics 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:30:59.690 10:53:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:02.224 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:02.224 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:02.224 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:02.224 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:02.224 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 
0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:02.484 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:03.862 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:31:03.862 00:31:03.862 real 0m14.323s 00:31:03.862 user 0m4.119s 00:31:03.862 sys 0m8.012s 00:31:03.862 10:53:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.862 10:53:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:03.862 ************************************ 00:31:03.862 END TEST nvmf_identify_kernel_target 00:31:03.862 ************************************ 00:31:04.121 10:53:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:04.121 10:53:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:04.121 10:53:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.121 10:53:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.121 ************************************ 00:31:04.121 START TEST nvmf_auth_host 00:31:04.121 ************************************ 00:31:04.121 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:04.121 * Looking for test storage... 00:31:04.121 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@47 -- # : 0 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:04.122 10:53:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- 
# echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:31:09.401 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:31:09.401 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:31:09.401 Found net devices under 0000:da:00.0: mlx_0_0 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:31:09.401 Found net devices under 
0000:da:00.1: mlx_0_1 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:09.401 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:09.402 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:09.402 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:31:09.402 altname enp218s0f0np0 00:31:09.402 altname ens818f0np0 00:31:09.402 inet 192.168.100.8/24 scope global mlx_0_0 00:31:09.402 valid_lft forever preferred_lft forever 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:09.402 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:09.402 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:31:09.402 altname enp218s0f1np1 00:31:09.402 altname ens818f1np1 00:31:09.402 inet 192.168.100.9/24 scope global mlx_0_1 00:31:09.402 valid_lft forever preferred_lft forever 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:09.402 192.168.100.9' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:09.402 192.168.100.9' 00:31:09.402 10:53:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:09.402 192.168.100.9' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2396907 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2396907 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2396907 ']' 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:09.402 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.662 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:09.662 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:09.662 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:09.662 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:09.662 10:53:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c370cbadeab039364ed3f26de638b72a 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oGg 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c370cbadeab039364ed3f26de638b72a 0 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c370cbadeab039364ed3f26de638b72a 0 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c370cbadeab039364ed3f26de638b72a 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oGg 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oGg 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.oGg 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file 
key 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=918993add5c9f0bf0bd1377cfee1da9a2258bcd8e8cd36e6b42fa0334709e5af 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Lkw 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 918993add5c9f0bf0bd1377cfee1da9a2258bcd8e8cd36e6b42fa0334709e5af 3 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 918993add5c9f0bf0bd1377cfee1da9a2258bcd8e8cd36e6b42fa0334709e5af 3 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=918993add5c9f0bf0bd1377cfee1da9a2258bcd8e8cd36e6b42fa0334709e5af 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:09.662 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Lkw 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Lkw 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Lkw 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f1bd3d56f6a80229a74f5413ce36073788eb3bb6b5cac6bc 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MrS 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f1bd3d56f6a80229a74f5413ce36073788eb3bb6b5cac6bc 0 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key 
DHHC-1 f1bd3d56f6a80229a74f5413ce36073788eb3bb6b5cac6bc 0 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.920 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f1bd3d56f6a80229a74f5413ce36073788eb3bb6b5cac6bc 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MrS 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MrS 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.MrS 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e543d88fa7acbd965e51625a4f7a5ad9ea9509ff2d6912ed 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0P4 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e543d88fa7acbd965e51625a4f7a5ad9ea9509ff2d6912ed 2 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e543d88fa7acbd965e51625a4f7a5ad9ea9509ff2d6912ed 2 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e543d88fa7acbd965e51625a4f7a5ad9ea9509ff2d6912ed 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0P4 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0P4 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0P4 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f5e4c88d6b9b7bfb4c26cac1b3773ad 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QW0 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f5e4c88d6b9b7bfb4c26cac1b3773ad 1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f5e4c88d6b9b7bfb4c26cac1b3773ad 1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f5e4c88d6b9b7bfb4c26cac1b3773ad 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QW0 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QW0 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.QW0 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=67f34f3a629cd74276e7f85f90b88475 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hOw 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 67f34f3a629cd74276e7f85f90b88475 1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 67f34f3a629cd74276e7f85f90b88475 1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=67f34f3a629cd74276e7f85f90b88475 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:09.921 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hOw 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hOw 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hOw 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cb185b011467f1cf51928feb7892c1205fc868f77ae0c5d8 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aU0 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cb185b011467f1cf51928feb7892c1205fc868f77ae0c5d8 2 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cb185b011467f1cf51928feb7892c1205fc868f77ae0c5d8 2 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cb185b011467f1cf51928feb7892c1205fc868f77ae0c5d8 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aU0 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aU0 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.aU0 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 
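Note: each gen_dhchap_key call traced here pulls random bytes, hex-encodes them as the secret, then wraps the secret in the DHHC-1 format. The python helper is not expanded by xtrace; the sketch below assumes it appends a CRC-32 of the secret and base64-encodes the result, which is consistent with the DHHC-1:... strings that appear later in this log (digest ids: null=0, sha256=1, sha384=2, sha512=3).

    # sketch of "gen_dhchap_key null 48": 48-char hex secret, digest id 00
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" > "$file" <<'PY'
    import sys, base64, struct, zlib
    secret = sys.argv[1].encode()   # the ASCII hex string itself is the key material (assumption)
    print('DHHC-1:00:%s:' % base64.b64encode(secret + struct.pack('<I', zlib.crc32(secret))).decode())
    PY
    chmod 0600 "$file"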
00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b4297d5b7a9dd0d05a02cd80b3892868 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aXN 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b4297d5b7a9dd0d05a02cd80b3892868 0 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b4297d5b7a9dd0d05a02cd80b3892868 0 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b4297d5b7a9dd0d05a02cd80b3892868 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aXN 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aXN 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aXN 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ab114ccc11c36aca8f6157f1036e5a993154071bcb1991bd02664b3eacac049 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.U95 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ab114ccc11c36aca8f6157f1036e5a993154071bcb1991bd02664b3eacac049 3 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ab114ccc11c36aca8f6157f1036e5a993154071bcb1991bd02664b3eacac049 3 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ab114ccc11c36aca8f6157f1036e5a993154071bcb1991bd02664b3eacac049 00:31:10.180 10:53:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.U95 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.U95 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.U95 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2396907 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2396907 ']' 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:10.180 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oGg 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Lkw ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Lkw 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.MrS 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# [[ -n /tmp/spdk.key-sha384.0P4 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0P4 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.QW0 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hOw ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hOw 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.aU0 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aXN ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aXN 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.U95 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:10.440 10:53:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:10.440 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:10.441 10:53:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:31:12.973 Waiting for block devices as requested 00:31:13.232 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:31:13.232 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:13.232 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:13.490 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:13.490 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:13.490 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:13.749 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:13.749 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:13.749 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:13.749 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:14.007 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:14.007 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:14.007 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:14.007 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:14.266 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:14.266 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:14.266 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:14.833 No valid GPT data, bailing 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:14.833 10:53:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:14.833 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:31:15.092 00:31:15.092 Discovery Log Number of Records 2, Generation counter 2 00:31:15.092 =====Discovery Log Entry 0====== 00:31:15.092 trtype: rdma 00:31:15.092 adrfam: ipv4 00:31:15.092 subtype: current discovery subsystem 00:31:15.092 treq: not specified, sq flow control disable supported 00:31:15.092 portid: 1 00:31:15.092 trsvcid: 4420 00:31:15.092 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:15.092 traddr: 192.168.100.8 00:31:15.092 eflags: none 00:31:15.092 rdma_prtype: not specified 00:31:15.092 rdma_qptype: connected 00:31:15.092 rdma_cms: rdma-cm 00:31:15.092 rdma_pkey: 0x0000 00:31:15.092 =====Discovery Log Entry 1====== 00:31:15.092 trtype: rdma 00:31:15.092 adrfam: ipv4 00:31:15.092 subtype: nvme subsystem 00:31:15.092 treq: not specified, sq flow control disable supported 00:31:15.092 portid: 1 00:31:15.092 trsvcid: 4420 00:31:15.092 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:15.092 traddr: 192.168.100.8 00:31:15.092 eflags: none 00:31:15.092 rdma_prtype: not specified 00:31:15.092 rdma_qptype: connected 00:31:15.092 rdma_cms: rdma-cm 00:31:15.092 rdma_pkey: 0x0000 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.092 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.093 10:53:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.093 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.352 nvme0n1 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:15.352 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.353 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.613 nvme0n1 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
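Note: each connect_authenticate iteration in this loop is the same handful of RPC calls against the running SPDK app, parameterized by digest, DH group and key index. Condensed from the rpc_cmd calls traced above; the rpc.py path and socket are assumed to be the stock scripts/rpc.py talking to /var/tmp/spdk.sock.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # key files generated earlier were registered once up front
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.oGg
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Lkw
    # restrict the host to the digest/dhgroup under test, then connect with DH-HMAC-CHAP
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc bdev_nvme_get_controllers            # expect a controller named nvme0
    $rpc bdev_nvme_detach_controller nvme0    # tear down before the next digest/dhgroup/key combo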
00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.613 10:53:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.613 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.872 nvme0n1 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
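Note: on the kernel-target side, each nvmet_auth_set_key call above echoes the hash, DH group and DHHC-1 secrets into the configfs entry for the allowed host. xtrace does not show the redirection targets, so the attribute names below are the usual nvmet host attributes and are an assumption about where those echo calls land; the key strings are the ones for keyid 2 as they appear in the trace that follows.

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU:' > "$host/dhchap_key"       # keys[2]
    echo 'DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS:' > "$host/dhchap_ctrl_key"  # ckeys[2]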
00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:15.872 10:53:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.872 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 nvme0n1 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:16.131 10:53:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.389 nvme0n1 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.389 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:16.648 10:53:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.648 10:53:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.648 nvme0n1 00:31:16.648 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.648 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.648 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.648 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.648 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.648 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.906 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.906 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.906 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.906 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.906 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.906 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 
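(Editorial sketch, not part of the captured log.) Each connect_authenticate call in this trace reduces to the RPC sequence visible in the xtrace output. A sketch of that sequence for the ffdhe3072/key-id-0 iteration, assuming rpc_cmd resolves to the SPDK rpc.py wrapper used by the autotest scripts and that key0/ckey0 are the key identifiers prepared earlier in the script (their registration is not shown in this excerpt):

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0    # the ctrlr key is omitted for key id 4, which has no ckey
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
rpc_cmd bdev_nvme_detach_controller nvme0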
00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.907 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.165 nvme0n1 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:17.165 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.166 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.425 nvme0n1 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:17.425 10:53:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.425 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.684 nvme0n1 00:31:17.684 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.684 
10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.684 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.684 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.684 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.684 10:53:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.684 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.685 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.944 nvme0n1 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:17.944 10:53:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.944 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.203 nvme0n1 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.203 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.462 
10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.462 10:53:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.739 nvme0n1 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
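(Editorial sketch, not part of the captured log.) The get_main_ns_ip helper traced before every attach (nvmf/common.sh@741-755) selects the address to dial based on the transport in use. A rough reconstruction from the xtrace lines; the transport variable name and the guard's exact form are assumptions, since xtrace only shows the [[ -z ... ]] tests and the final echo:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}    # rdma in this run, so NVMF_FIRST_TARGET_IP is chosen
    [[ -z ${!ip} ]] && return 1             # assumed guard for an unset target address
    echo "${!ip}"                           # indirect expansion yields 192.168.100.8 here
}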
00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.739 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.044 nvme0n1 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:19.044 10:53:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.044 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.303 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.562 nvme0n1 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.562 10:53:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.562 10:53:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.821 nvme0n1 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.821 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.389 nvme0n1 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.389 10:53:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.648 nvme0n1 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.648 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.906 10:53:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:20.906 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.907 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.164 nvme0n1 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.164 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 10:53:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.682 nvme0n1 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.682 10:53:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.682 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.941 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.200 nvme0n1 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.200 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:22.459 10:53:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.459 10:53:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.718 nvme0n1 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.719 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:22.978 10:53:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.978 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.979 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:22.979 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:22.979 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:22.979 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.979 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 nvme0n1 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.547 10:53:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:23.547 10:53:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 10:53:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.117 nvme0n1 00:31:24.117 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.117 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.117 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.117 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.117 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.117 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.377 10:53:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.946 nvme0n1 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:24.946 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.947 10:53:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.947 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.882 nvme0n1 00:31:25.882 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.882 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.882 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.882 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.882 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.882 10:53:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:25.882 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.883 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.449 nvme0n1 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.449 10:53:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:26.449 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:26.450 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:26.450 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.450 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.708 nvme0n1 00:31:26.708 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.708 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.709 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.709 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.709 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.709 10:53:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.709 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.968 nvme0n1 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.968 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.227 nvme0n1 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.227 10:53:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.227 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.228 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.486 nvme0n1 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:27.486 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
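The trace above drives the same host-side sequence for every digest/dhgroup/keyid combination: restrict the initiator to a single DH-HMAC-CHAP digest and DH group, then attach the controller over RDMA with the matching secret (plus a controller secret whenever the key ID defines a ckey). A minimal sketch of that sequence for the sha384/ffdhe2048/key 3 case just shown, assuming SPDK's standard scripts/rpc.py client and that key3/ckey3 were registered earlier in the run (the registration is not part of this excerpt):

    # Allow only one digest and DH group for DH-HMAC-CHAP negotiation
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    # Attach over RDMA with bidirectional authentication (host key + controller key)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3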
00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.487 10:53:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.745 nvme0n1 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.745 
10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.745 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.004 nvme0n1 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.004 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.263 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.263 10:53:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.523 nvme0n1 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.523 10:53:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.523 10:53:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.783 nvme0n1 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 
3 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.783 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.048 nvme0n1 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.048 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.306 nvme0n1 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.306 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
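Between combinations the test confirms that the authenticated attach actually produced a controller, then detaches it before moving on to the next DH group (ffdhe4096 from here on). A condensed sketch of that check, under the same rpc.py assumption as above:

    # Confirm the controller registered under the expected name, then tear it down;
    # its namespace is what appears in the trace as the bdev nvme0n1.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0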
00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.565 10:53:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.825 nvme0n1 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.825 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.083 nvme0n1 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.083 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.341 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.600 nvme0n1 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.600 10:53:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.600 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.601 10:53:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.860 nvme0n1 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.860 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.118 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.119 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.378 nvme0n1 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.378 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.379 10:53:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.950 nvme0n1 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.950 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.518 nvme0n1 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.518 10:53:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.518 10:53:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.518 10:53:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.777 nvme0n1 00:31:32.777 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.777 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.777 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.777 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.777 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.777 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 
00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.036 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:33.037 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.037 10:53:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.295 nvme0n1 00:31:33.296 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.296 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.296 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.296 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.296 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.296 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.555 10:53:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.813 nvme0n1 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.813 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
192.168.100.8 ]] 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.072 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.638 nvme0n1 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.638 10:53:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.638 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.206 nvme0n1 00:31:35.206 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.206 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.206 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.206 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.206 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.464 10:53:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.464 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.465 10:53:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.465 10:53:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.033 nvme0n1 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.033 10:53:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.979 nvme0n1 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.979 
10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.979 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.586 nvme0n1 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.586 10:53:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.586 10:53:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.845 nvme0n1 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.845 10:53:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.845 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.104 nvme0n1 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.104 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.104 
10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.105 
10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.105 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.363 nvme0n1 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.363 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:38.364 10:53:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.364 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.623 nvme0n1 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.623 10:53:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.623 10:53:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 
00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.623 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.881 nvme0n1 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:38.881 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:38.882 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.882 10:53:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.140 nvme0n1 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.140 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.398 nvme0n1 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.398 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.657 
10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 
00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.657 10:53:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.916 nvme0n1 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:39.916 10:53:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.916 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.175 nvme0n1 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.175 
10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.175 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.434 nvme0n1 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:40.434 10:53:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.434 10:53:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.002 nvme0n1 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.002 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.261 nvme0n1 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.261 
10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.261 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.520 nvme0n1 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.520 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.779 10:53:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.038 nvme0n1 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.038 10:53:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.038 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.297 nvme0n1 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:42.297 10:53:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.297 10:53:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.863 nvme0n1 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.863 10:53:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.863 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.864 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.431 nvme0n1 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 
00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.431 10:53:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:43.999 nvme0n1 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.999 10:53:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.999 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.258 nvme0n1 00:31:44.258 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.258 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.258 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.258 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.258 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.258 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 
4 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.517 10:53:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.777 nvme0n1 00:31:44.777 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.777 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.777 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.777 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.777 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.777 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM3MGNiYWRlYWIwMzkzNjRlZDNmMjZkZTYzOGI3MmFCPj39: 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTE4OTkzYWRkNWM5ZjBiZjBiZDEzNzdjZmVlMWRhOWEyMjU4YmNkOGU4Y2QzNmU2YjQyZmEwMzM0NzA5ZTVhZgTcDbc=: 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:45.036 10:53:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.036 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.607 nvme0n1 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
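The trace has just finished the first ffdhe8192 iteration (keyid 0) after completing the ffdhe6144 pass: host/auth.sh@101 loops over DH groups and @102 over keyids, repeating the provision-then-connect cycle each time. A skeleton of that loop, inferred from the trace; the keys/ckeys arrays hold the DHHC-1 secrets generated earlier in the test and are referenced only symbolically here:

dhgroups=(ffdhe6144 ffdhe8192)                       # groups visible in this part of the run
for dhgroup in "${dhgroups[@]}"; do                  # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                   # host/auth.sh@102, keyids 0-4
        nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # target side: install key/ckey (@103)
        connect_authenticate sha512 "$dhgroup" "$keyid"   # host side: set options, attach, verify, detach (@104)
    done
done
# The digest is fixed at sha512 in this portion of the run; earlier sections exercise other digests.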
00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.607 10:53:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:45.607 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.608 
10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.608 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.553 nvme0n1 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Y1ZTRjODhkNmI5YjdiZmI0YzI2Y2FjMWIzNzczYWSAigEU: 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdmMzRmM2E2MjljZDc0Mjc2ZTdmODVmOTBiODg0NzVp16NS: 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.553 10:53:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.122 nvme0n1 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.122 10:53:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2IxODViMDExNDY3ZjFjZjUxOTI4ZmViNzg5MmMxMjA1ZmM4NjhmNzdhZTBjNWQ420kdDg==: 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjQyOTdkNWI3YTlkZDBkMDVhMDJjZDgwYjM4OTI4Nji2yNR2: 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.122 10:53:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.690 nvme0n1 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.690 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGFiMTE0Y2NjMTFjMzZhY2E4ZjYxNTdmMTAzNmU1YTk5MzE1NDA3MWJjYjE5OTFiZDAyNjY0YjNlYWNhYzA0OXhDhy8=: 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.950 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.950 10:53:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.520 nvme0n1 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFiZDNkNTZmNmE4MDIyOWE3NGY1NDEzY2UzNjA3Mzc4OGViM2JiNmI1Y2FjNmJjQ+C1ZA==: 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU0M2Q4OGZhN2FjYmQ5NjVlNTE2MjVhNGY3YTVhZDllYTk1MDlmZjJkNjkxMmVkpmUSkA==: 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.520 10:53:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.520 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.780 request: 00:31:48.780 { 00:31:48.780 "name": "nvme0", 00:31:48.780 "trtype": "rdma", 00:31:48.780 "traddr": "192.168.100.8", 00:31:48.780 "adrfam": "ipv4", 00:31:48.780 "trsvcid": "4420", 00:31:48.780 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:48.780 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:48.780 "prchk_reftag": false, 00:31:48.780 "prchk_guard": false, 00:31:48.780 "hdgst": false, 00:31:48.780 "ddgst": false, 00:31:48.780 "method": "bdev_nvme_attach_controller", 00:31:48.780 "req_id": 1 00:31:48.780 } 00:31:48.780 Got JSON-RPC error response 00:31:48.780 response: 00:31:48.780 { 00:31:48.780 "code": -5, 00:31:48.780 "message": "Input/output error" 00:31:48.780 } 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.780 10:53:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:48.780 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.781 request: 
00:31:48.781 { 00:31:48.781 "name": "nvme0", 00:31:48.781 "trtype": "rdma", 00:31:48.781 "traddr": "192.168.100.8", 00:31:48.781 "adrfam": "ipv4", 00:31:48.781 "trsvcid": "4420", 00:31:48.781 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:48.781 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:48.781 "prchk_reftag": false, 00:31:48.781 "prchk_guard": false, 00:31:48.781 "hdgst": false, 00:31:48.781 "ddgst": false, 00:31:48.781 "dhchap_key": "key2", 00:31:48.781 "method": "bdev_nvme_attach_controller", 00:31:48.781 "req_id": 1 00:31:48.781 } 00:31:48.781 Got JSON-RPC error response 00:31:48.781 response: 00:31:48.781 { 00:31:48.781 "code": -5, 00:31:48.781 "message": "Input/output error" 00:31:48.781 } 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.781 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.041 request: 00:31:49.041 { 00:31:49.041 "name": "nvme0", 00:31:49.041 "trtype": "rdma", 00:31:49.041 "traddr": "192.168.100.8", 00:31:49.041 "adrfam": "ipv4", 00:31:49.041 "trsvcid": "4420", 00:31:49.041 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:49.041 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:49.041 "prchk_reftag": false, 00:31:49.041 "prchk_guard": false, 00:31:49.041 "hdgst": false, 00:31:49.041 "ddgst": false, 00:31:49.041 "dhchap_key": "key1", 00:31:49.041 "dhchap_ctrlr_key": "ckey2", 00:31:49.041 "method": "bdev_nvme_attach_controller", 00:31:49.041 "req_id": 1 00:31:49.041 } 00:31:49.041 Got JSON-RPC error response 00:31:49.041 response: 00:31:49.041 { 00:31:49.041 "code": -5, 00:31:49.041 "message": "Input/output error" 00:31:49.041 } 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:49.041 rmmod nvme_rdma 00:31:49.041 rmmod nvme_fabrics 00:31:49.041 10:53:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2396907 ']' 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2396907 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2396907 ']' 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2396907 00:31:49.041 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:31:49.042 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:49.042 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2396907 00:31:49.042 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:49.042 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:49.042 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2396907' 00:31:49.042 killing process with pid 2396907 00:31:49.042 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2396907 00:31:49.042 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2396907 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:31:49.302 10:53:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:51.841 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:51.841 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:53.768 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:31:53.768 10:54:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oGg /tmp/spdk.key-null.MrS /tmp/spdk.key-sha256.QW0 /tmp/spdk.key-sha384.aU0 /tmp/spdk.key-sha512.U95 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:31:53.768 10:54:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:55.677 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:55.677 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:55.677 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:55.677 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:55.677 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:55.677 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:55.677 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:55.677 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:55.937 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:55.937 00:31:55.937 real 0m51.932s 00:31:55.937 user 0m48.058s 00:31:55.937 sys 0m11.633s 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.937 ************************************ 00:31:55.937 END TEST nvmf_auth_host 00:31:55.937 ************************************ 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:55.937 10:54:03 
nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.937 ************************************ 00:31:55.937 START TEST nvmf_bdevperf 00:31:55.937 ************************************ 00:31:55.937 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:31:56.196 * Looking for test storage... 00:31:56.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:56.196 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
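Editor's note: the trace above shows bdevperf.sh sourcing test/nvmf/common.sh, which pins the RDMA test subnet (192.168.100.x, ports 4420-4422) and derives a host identity from `nvme gen-hostnqn`. A minimal sketch of that pattern follows; variable names mirror the trace, but the exact helper bodies in common.sh are not reproduced here and the host-ID extraction is only one plausible way to do it.

    # Illustrative only: mirrors the environment seen in the trace, not the real common.sh.
    NVMF_PORT=4420
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8

    # gen-hostnqn prints "nqn.2014-08.org.nvmexpress:uuid:<uuid>"; the UUID suffix doubles as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")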
00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:56.197 10:54:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.472 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:32:01.473 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:32:01.473 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # 
[[ rdma == rdma ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:32:01.473 Found net devices under 0000:da:00.0: mlx_0_0 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:32:01.473 Found net devices under 0000:da:00.1: mlx_0_1 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # 
modprobe rdma_cm 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:01.473 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:01.733 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:01.733 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:32:01.733 altname enp218s0f0np0 00:32:01.733 altname ens818f0np0 00:32:01.733 inet 192.168.100.8/24 scope global mlx_0_0 00:32:01.733 
valid_lft forever preferred_lft forever 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:01.733 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:01.733 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:32:01.733 altname enp218s0f1np1 00:32:01.733 altname ens818f1np1 00:32:01.733 inet 192.168.100.9/24 scope global mlx_0_1 00:32:01.733 valid_lft forever preferred_lft forever 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:01.733 10:54:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:01.733 10:54:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:01.733 192.168.100.9' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:01.733 192.168.100.9' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:01.733 192.168.100.9' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:01.733 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:01.733 10:54:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2410578 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2410578 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2410578 ']' 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:01.734 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:01.734 [2024-07-24 10:54:09.119343] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:32:01.734 [2024-07-24 10:54:09.119382] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.734 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.734 [2024-07-24 10:54:09.175564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:01.993 [2024-07-24 10:54:09.217758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.993 [2024-07-24 10:54:09.217797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.993 [2024-07-24 10:54:09.217805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.993 [2024-07-24 10:54:09.217811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.993 [2024-07-24 10:54:09.217817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
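Editor's note: at this point nvmfappstart has launched nvmf_tgt (-i 0 -e 0xFFFF -m 0xE) and is waiting for its RPC socket before the test proceeds. A condensed sketch of that launch-and-wait pattern is below; the polling loop is a simplified stand-in for waitforlisten in autotest_common.sh, which also enforces retry limits and cleanup on timeout.

    # Illustrative launch of the SPDK NVMe-oF target as done by nvmfappstart in the trace.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # shm id 0, full tracepoint mask, cores 1-3
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: block until the RPC socket answers.
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done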
00:32:01.993 [2024-07-24 10:54:09.217877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.993 [2024-07-24 10:54:09.217953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.993 [2024-07-24 10:54:09.217954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.993 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:01.993 [2024-07-24 10:54:09.378148] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1472c90/0x1477140) succeed. 00:32:01.993 [2024-07-24 10:54:09.387181] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14741e0/0x14b87d0) succeed. 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:02.252 Malloc0 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:32:02.252 [2024-07-24 10:54:09.528484] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:02.252 { 00:32:02.252 "params": { 00:32:02.252 "name": "Nvme$subsystem", 00:32:02.252 "trtype": "$TEST_TRANSPORT", 00:32:02.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:02.252 "adrfam": "ipv4", 00:32:02.252 "trsvcid": "$NVMF_PORT", 00:32:02.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:02.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:02.252 "hdgst": ${hdgst:-false}, 00:32:02.252 "ddgst": ${ddgst:-false} 00:32:02.252 }, 00:32:02.252 "method": "bdev_nvme_attach_controller" 00:32:02.252 } 00:32:02.252 EOF 00:32:02.252 )") 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:02.252 10:54:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:02.252 "params": { 00:32:02.252 "name": "Nvme1", 00:32:02.252 "trtype": "rdma", 00:32:02.252 "traddr": "192.168.100.8", 00:32:02.252 "adrfam": "ipv4", 00:32:02.252 "trsvcid": "4420", 00:32:02.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:02.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:02.252 "hdgst": false, 00:32:02.252 "ddgst": false 00:32:02.252 }, 00:32:02.252 "method": "bdev_nvme_attach_controller" 00:32:02.252 }' 00:32:02.252 [2024-07-24 10:54:09.574786] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:32:02.252 [2024-07-24 10:54:09.574827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410753 ] 00:32:02.252 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.252 [2024-07-24 10:54:09.629367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.252 [2024-07-24 10:54:09.670548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.511 Running I/O for 1 seconds... 
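Editor's note: the first bdevperf pass is driven entirely by a JSON config fed through a process substitution (--json /dev/fd/62). The printf above shows the per-controller entry that gen_nvmf_target_json emits; the sketch below wraps it into a standalone config file so the same run could be reproduced by hand. The outer "subsystems"/"bdev" wrapper is an assumption about gen_nvmf_target_json's output shape and is not shown in this excerpt.

    # Roughly what bdevperf receives via --json; inner params copied from the printf in the trace.
    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "params": {
            "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        } ]
      } ]
    }
    EOF
    "$SPDK_DIR"/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1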
00:32:03.447 00:32:03.447 Latency(us) 00:32:03.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.448 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:03.448 Verification LBA range: start 0x0 length 0x4000 00:32:03.448 Nvme1n1 : 1.00 17862.66 69.78 0.00 0.00 7125.04 187.25 12108.56 00:32:03.448 =================================================================================================================== 00:32:03.448 Total : 17862.66 69.78 0.00 0.00 7125.04 187.25 12108.56 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2411038 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:03.710 { 00:32:03.710 "params": { 00:32:03.710 "name": "Nvme$subsystem", 00:32:03.710 "trtype": "$TEST_TRANSPORT", 00:32:03.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.710 "adrfam": "ipv4", 00:32:03.710 "trsvcid": "$NVMF_PORT", 00:32:03.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.710 "hdgst": ${hdgst:-false}, 00:32:03.710 "ddgst": ${ddgst:-false} 00:32:03.710 }, 00:32:03.710 "method": "bdev_nvme_attach_controller" 00:32:03.710 } 00:32:03.710 EOF 00:32:03.710 )") 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:03.710 10:54:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:03.710 "params": { 00:32:03.710 "name": "Nvme1", 00:32:03.710 "trtype": "rdma", 00:32:03.710 "traddr": "192.168.100.8", 00:32:03.710 "adrfam": "ipv4", 00:32:03.710 "trsvcid": "4420", 00:32:03.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.710 "hdgst": false, 00:32:03.710 "ddgst": false 00:32:03.710 }, 00:32:03.710 "method": "bdev_nvme_attach_controller" 00:32:03.710 }' 00:32:03.710 [2024-07-24 10:54:11.080226] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
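Editor's note: the 1-second verify pass completes cleanly (~17.9k IOPS, ~69.8 MiB/s), after which the script starts a second bdevperf run for 15 seconds, sleeps a few seconds, and hard-kills the target; the flood of ABORTED - SQ DELETION read completions that follows is the expected fallout of yanking the controller mid-I/O. A stripped-down sketch of that orchestration, reusing the hypothetical config file and nvmfpid variable from the sketches above:

    # Illustrative reproduction of the kill-the-target-while-I/O-runs step seen in the trace.
    "$SPDK_DIR"/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!

    sleep 3               # let the 15 s verify workload get going
    kill -9 "$nvmfpid"    # hard-kill nvmf_tgt; outstanding reads complete as ABORTED - SQ DELETION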
00:32:03.710 [2024-07-24 10:54:11.080275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411038 ] 00:32:03.710 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.710 [2024-07-24 10:54:11.135052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.968 [2024-07-24 10:54:11.173425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.968 Running I/O for 15 seconds... 00:32:06.609 10:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2410578 00:32:06.609 10:54:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:07.988 [2024-07-24 10:54:15.075943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.075985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 
00:32:07.988 [2024-07-24 10:54:15.076101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 
10:54:15.076234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.988 [2024-07-24 10:54:15.076317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184400 00:32:07.988 [2024-07-24 10:54:15.076323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076358] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x184400 00:32:07.989 [2024-07-24 10:54:15.076839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.989 [2024-07-24 10:54:15.076847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.076992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.076999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x184400 00:32:07.990 [2024-07-24 10:54:15.077269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.990 [2024-07-24 10:54:15.077283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.990 [2024-07-24 10:54:15.077296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.990 [2024-07-24 10:54:15.077310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.990 [2024-07-24 10:54:15.077323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.990 [2024-07-24 10:54:15.077337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.990 [2024-07-24 10:54:15.077345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.990 [2024-07-24 10:54:15.077351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac 
p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 
10:54:15.077696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.077778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.077788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.086062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.086073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.086080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.086088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.991 [2024-07-24 10:54:15.086094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:56440 cdw0:9f99c000 sqhd:61ac p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.087873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:07.991 
[2024-07-24 10:54:15.087884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:07.991 [2024-07-24 10:54:15.087890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126264 len:8 PRP1 0x0 PRP2 0x0 00:32:07.991 [2024-07-24 10:54:15.087896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.087932] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:32:07.991 [2024-07-24 10:54:15.087957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.991 [2024-07-24 10:54:15.087964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56440 cdw0:0 sqhd:936c p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.087971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.991 [2024-07-24 10:54:15.087978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56440 cdw0:0 sqhd:936c p:1 m:0 dnr:0 00:32:07.991 [2024-07-24 10:54:15.087984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.992 [2024-07-24 10:54:15.087991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56440 cdw0:0 sqhd:936c p:1 m:0 dnr:0 00:32:07.992 [2024-07-24 10:54:15.087997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.992 [2024-07-24 10:54:15.088003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56440 cdw0:0 sqhd:936c p:1 m:0 dnr:0 00:32:07.992 [2024-07-24 10:54:15.104559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:07.992 [2024-07-24 10:54:15.104608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.992 [2024-07-24 10:54:15.104631] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:07.992 [2024-07-24 10:54:15.107734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.992 [2024-07-24 10:54:15.110587] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:07.992 [2024-07-24 10:54:15.110603] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:07.992 [2024-07-24 10:54:15.110609] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:32:08.929 [2024-07-24 10:54:16.114552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:08.929 [2024-07-24 10:54:16.114602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
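The failure pattern just above — a CQ transport error -6 on qpair id 0, the controller marked failed, a reset attempt, then RDMA_CM_EVENT_REJECTED and connect error -74 — repeats roughly once per second (10:54:15, 10:54:16, ...) for as long as the target side stays unreachable. The retry state can be watched from a second shell with the bdev_nvme_get_controllers RPC; a minimal sketch, assuming the standard SPDK tree layout and the default RPC socket of the bdevperf application (both assumptions, not taken from this run):

sudo ./scripts/rpc.py bdev_nvme_get_controllers            # dump every attached controller and its current state while reconnects are retried
sudo ./scripts/rpc.py bdev_nvme_get_controllers -n Nvme1   # hypothetical controller name; narrows the output to a single controller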
00:32:08.929 [2024-07-24 10:54:16.114830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.929 [2024-07-24 10:54:16.114837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.929 [2024-07-24 10:54:16.114844] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:08.929 [2024-07-24 10:54:16.117219] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:08.929 [2024-07-24 10:54:16.117536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.929 [2024-07-24 10:54:16.130049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.929 [2024-07-24 10:54:16.132802] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:08.929 [2024-07-24 10:54:16.132819] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:08.929 [2024-07-24 10:54:16.132824] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:32:09.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2410578 Killed "${NVMF_APP[@]}" "$@" 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2412069 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2412069 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2412069 ']' 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:09.867 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.867 [2024-07-24 10:54:17.098522] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 
00:32:09.867 [2024-07-24 10:54:17.098564] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.867 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.867 [2024-07-24 10:54:17.136732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:09.867 [2024-07-24 10:54:17.136751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:09.867 [2024-07-24 10:54:17.136925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:09.867 [2024-07-24 10:54:17.136933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:09.867 [2024-07-24 10:54:17.136944] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:09.867 [2024-07-24 10:54:17.139701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:09.867 [2024-07-24 10:54:17.142955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:09.868 [2024-07-24 10:54:17.145414] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:09.868 [2024-07-24 10:54:17.145432] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:09.868 [2024-07-24 10:54:17.145438] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:32:09.868 [2024-07-24 10:54:17.155250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:09.868 [2024-07-24 10:54:17.194858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.868 [2024-07-24 10:54:17.194898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.868 [2024-07-24 10:54:17.194906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.868 [2024-07-24 10:54:17.194911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.868 [2024-07-24 10:54:17.194916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
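The app_setup_trace notices above mean a tracepoint snapshot can be pulled from the running nvmf_tgt while it serves I/O, or the shared-memory trace file can simply be copied for offline decoding. A short sketch, assuming the spdk_trace tool is built in its usual location under the SPDK tree (the exact binary path is an assumption; the arguments are the ones the notice itself suggests):

sudo ./build/bin/spdk_trace -s nvmf -i 0   # decode a live snapshot of the nvmf tracepoint group for shm id 0
cp /dev/shm/nvmf_trace.0 /tmp/             # or keep the raw trace file for later analysis, as the last notice recommends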
00:32:09.868 [2024-07-24 10:54:17.194955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.868 [2024-07-24 10:54:17.195046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.868 [2024-07-24 10:54:17.195047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.868 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:09.868 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:09.868 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:09.868 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:09.868 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:10.127 [2024-07-24 10:54:17.349092] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfbfc90/0xfc4140) succeed. 00:32:10.127 [2024-07-24 10:54:17.358290] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfc11e0/0x10057d0) succeed. 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:10.127 Malloc0 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
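The tgt_init bring-up xtraced above reduces to five JSON-RPC calls against the freshly started nvmf_tgt. Condensed into a plain sketch (rpc_cmd in the test framework wraps scripts/rpc.py; that path and the default RPC socket are assumptions, the arguments are taken from the run above):

sudo ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
sudo ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420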
00:32:10.127 [2024-07-24 10:54:17.507623] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.127 10:54:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2411038 00:32:10.696 [2024-07-24 10:54:18.149300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:10.696 [2024-07-24 10:54:18.149326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.696 [2024-07-24 10:54:18.149507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.696 [2024-07-24 10:54:18.149516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.696 [2024-07-24 10:54:18.149523] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:10.955 [2024-07-24 10:54:18.152273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.955 [2024-07-24 10:54:18.160229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.955 [2024-07-24 10:54:18.201927] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:19.074 00:32:19.074 Latency(us) 00:32:19.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.074 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:19.074 Verification LBA range: start 0x0 length 0x4000 00:32:19.074 Nvme1n1 : 15.01 13030.14 50.90 10351.62 0.00 5453.51 372.54 1070546.16 00:32:19.074 =================================================================================================================== 00:32:19.074 Total : 13030.14 50.90 10351.62 0.00 5453.51 372.54 1070546.16 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:19.333 rmmod nvme_rdma 
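The Latency(us) summary above comes from a 15-second verify workload at queue depth 128 with 4096-byte I/O; the large Fail/s figure is expected in this run, since the target application is killed and re-initialized underneath the workload (the Killed / tgt_init lines earlier), so outstanding I/O fails until the controller reset succeeds. To repeat a comparable workload outside the CI wrapper, bdevperf can be pointed at the same listener with a small JSON config; this is a sketch only, assuming the bdevperf binary under build/examples and the addressing used in this run (the controller name, file paths, and option set are assumptions):

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "adrfam": "ipv4",
            "traddr": "192.168.100.8",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1"
          }
        }
      ]
    }
  ]
}
EOF
sudo ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15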
00:32:19.333 rmmod nvme_fabrics 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2412069 ']' 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2412069 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2412069 ']' 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2412069 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2412069 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2412069' 00:32:19.333 killing process with pid 2412069 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2412069 00:32:19.333 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2412069 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:19.592 00:32:19.592 real 0m23.586s 00:32:19.592 user 1m1.735s 00:32:19.592 sys 0m5.272s 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.592 ************************************ 00:32:19.592 END TEST nvmf_bdevperf 00:32:19.592 ************************************ 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:19.592 10:54:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.592 ************************************ 00:32:19.592 START TEST nvmf_target_disconnect 00:32:19.592 ************************************ 00:32:19.592 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:32:19.851 * Looking for test storage... 
00:32:19.851 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:32:19.851 10:54:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:32:25.122 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:32:25.122 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:25.122 10:54:31 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:32:25.122 Found net devices under 0000:da:00.0: mlx_0_0 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.122 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:32:25.123 Found net devices under 0000:da:00.1: mlx_0_1 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:32:25.123 10:54:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:25.123 10:54:31 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:25.123 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:25.123 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:32:25.123 altname enp218s0f0np0 00:32:25.123 altname ens818f0np0 00:32:25.123 inet 192.168.100.8/24 scope global mlx_0_0 00:32:25.123 valid_lft forever preferred_lft forever 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:25.123 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:25.123 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:32:25.123 altname enp218s0f1np1 00:32:25.123 altname ens818f1np1 00:32:25.123 inet 192.168.100.9/24 scope global mlx_0_1 00:32:25.123 valid_lft forever preferred_lft forever 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
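The address discovery traced around this point (and in the second pass that continues below) reduces to listing the mlx_0_* net devices and extracting their IPv4 addresses. A minimal standalone sketch of that step, assuming the RDMA-capable interfaces are already named mlx_0_0 and mlx_0_1 as in this run:

  # Sketch only: mirrors how nvmf/common.sh derives NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.
  for iface in mlx_0_0 mlx_0_1; do
      # Field 4 of `ip -o -4 addr show` is "addr/prefix"; drop the prefix length.
      ip_addr=$(ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1)
      echo "$iface -> ${ip_addr:-<no IPv4 address>}"
  done
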
00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:25.123 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:25.124 192.168.100.9' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:25.124 192.168.100.9' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:25.124 192.168.100.9' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n 
+2 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:25.124 ************************************ 00:32:25.124 START TEST nvmf_target_disconnect_tc1 00:32:25.124 ************************************ 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:25.124 10:54:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:32:25.124 10:54:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:25.124 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.124 [2024-07-24 10:54:32.297215] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:25.124 [2024-07-24 10:54:32.297251] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:25.124 [2024-07-24 10:54:32.297258] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:32:26.062 [2024-07-24 10:54:33.301285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:26.062 [2024-07-24 10:54:33.301352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:26.062 [2024-07-24 10:54:33.301361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:32:26.062 [2024-07-24 10:54:33.301380] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:26.062 [2024-07-24 10:54:33.301386] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:26.062 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:32:26.062 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:26.062 Initializing NVMe Controllers 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:26.062 00:32:26.062 real 0m1.113s 00:32:26.062 user 0m0.953s 00:32:26.062 sys 0m0.149s 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:26.062 ************************************ 00:32:26.062 END TEST nvmf_target_disconnect_tc1 00:32:26.062 ************************************ 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:26.062 10:54:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:26.062 ************************************ 00:32:26.062 START TEST nvmf_target_disconnect_tc2 00:32:26.062 ************************************ 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2416776 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2416776 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2416776 ']' 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:26.062 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.062 [2024-07-24 10:54:33.408347] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:32:26.062 [2024-07-24 10:54:33.408385] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.062 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.062 [2024-07-24 10:54:33.475331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:26.321 [2024-07-24 10:54:33.515925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:26.321 [2024-07-24 10:54:33.515961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.321 [2024-07-24 10:54:33.515969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.321 [2024-07-24 10:54:33.515974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.321 [2024-07-24 10:54:33.515980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.321 [2024-07-24 10:54:33.516090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:32:26.321 [2024-07-24 10:54:33.516216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:32:26.321 [2024-07-24 10:54:33.516253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:26.321 [2024-07-24 10:54:33.516255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.321 Malloc0 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.321 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.321 [2024-07-24 10:54:33.705407] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x118dfb0/0x1199c80) succeed. 00:32:26.321 [2024-07-24 10:54:33.714699] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x118f5a0/0x1239d80) succeed. 
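With the malloc bdev and RDMA transport created above, the trace that follows adds the subsystem, namespace and listener through rpc_cmd. Consolidated into plain rpc.py calls, the target-side setup used here amounts to the sketch below (it assumes an nvmf_tgt is already running on the default RPC socket; the bdev size, NQN and 192.168.100.8:4420 listener are the ones from this run):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024     # NVMe/RDMA transport
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
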
00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.581 [2024-07-24 10:54:33.854749] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2416804 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:26.581 10:54:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:26.581 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.487 10:54:35 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2416776 00:32:28.487 10:54:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Write completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.864 Read completed with error (sct=0, sc=8) 00:32:29.864 starting I/O failed 00:32:29.865 [2024-07-24 10:54:37.034818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:30.432 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2416776 Killed "${NVMF_APP[@]}" "$@" 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:32:30.433 10:54:37 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2417491 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2417491 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2417491 ']' 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.433 10:54:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.692 [2024-07-24 10:54:37.925560] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:32:30.692 [2024-07-24 10:54:37.925611] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.692 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.692 [2024-07-24 10:54:37.990874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:30.692 [2024-07-24 10:54:38.032091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.692 [2024-07-24 10:54:38.032130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.692 [2024-07-24 10:54:38.032137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.692 [2024-07-24 10:54:38.032143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.692 [2024-07-24 10:54:38.032148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:30.692 [2024-07-24 10:54:38.032273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:32:30.692 [2024-07-24 10:54:38.032382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:32:30.692 [2024-07-24 10:54:38.032489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:30.692 [2024-07-24 10:54:38.032512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Write completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.692 Read completed with error (sct=0, sc=8) 00:32:30.692 starting I/O failed 00:32:30.693 [2024-07-24 10:54:38.039950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:30.693 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.693 10:54:38 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:30.693 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:30.693 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:30.693 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.952 Malloc0 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.952 [2024-07-24 10:54:38.214250] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcaefb0/0xcbac80) succeed. 00:32:30.952 [2024-07-24 10:54:38.223608] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcb05a0/0xd5ad80) succeed. 
00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.952 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.953 [2024-07-24 10:54:38.365049] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.953 10:54:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2416804 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 
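For reference, the target-side setup that the trace above performs can be collected into a short standalone script. This is a minimal sketch, not part of the test harness: it assumes an SPDK nvmf_tgt application is already running and that scripts/rpc.py is reachable at the path shown (adjust to your tree); the RPC method names and arguments are taken verbatim from the rpc_cmd calls in the trace.

    #!/usr/bin/env bash
    # Minimal sketch of the target setup shown in the trace above.
    # Assumes an SPDK nvmf_tgt is already running; rpc.py path is an assumption.
    rpc=./scripts/rpc.py

    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024   # RDMA transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

Once the last two calls return, the target logs the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice seen above and the host side of the disconnect test can proceed.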
starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Write completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.890 starting I/O failed 00:32:31.890 Read completed with error (sct=0, sc=8) 00:32:31.891 starting I/O failed 00:32:31.891 Write completed with error (sct=0, sc=8) 00:32:31.891 starting I/O failed 00:32:31.891 Read completed with error (sct=0, sc=8) 00:32:31.891 starting I/O failed 00:32:31.891 Write completed with error (sct=0, sc=8) 00:32:31.891 starting I/O failed 00:32:31.891 Write completed with error (sct=0, sc=8) 00:32:31.891 starting I/O failed 00:32:31.891 Write completed with error (sct=0, sc=8) 00:32:31.891 starting I/O failed 00:32:31.891 Read completed with error (sct=0, sc=8) 00:32:31.891 starting I/O failed 00:32:31.891 [2024-07-24 10:54:39.045117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 [2024-07-24 10:54:39.050696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.050747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.050766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.050774] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.050780] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.061095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 
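The pattern that repeats from here on is the host re-issuing the Fabrics CONNECT for I/O qpair 3 and the target rejecting it with "Unknown controller ID 0x1": sct 1 / sc 130 (0x82) is the Fabrics "Connect Invalid Parameters" status, after which the completion queue reports transport error -6 (ENXIO) and the qpair cannot be recovered. When triaging a run stuck in this loop, one way to see the target's side of the story is to query the subsystem state over RPC. This is a hedged sketch, not part of the test: it assumes the query RPCs below are available in the SPDK build under test and that rpc.py is at the path shown.

    # Hedged sketch: inspect what the target still knows about cnode1 while the
    # host keeps retrying CONNECT. rpc.py path is an assumption.
    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_subsystem_get_controllers "$nqn"   # live controllers (the host expects cntlid 0x1 here)
    $rpc nvmf_subsystem_get_qpairs "$nqn"        # admin/I/O qpairs the target currently has connected
    $rpc nvmf_subsystem_get_listeners "$nqn"     # confirm 192.168.100.8:4420 is still being listened on

If the controller list no longer contains ID 0x1, the rejections below are expected: the I/O-queue CONNECT references a controller association the target has already torn down, which is what this disconnect test case exercises.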
00:32:31.891 [2024-07-24 10:54:39.070817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.070856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.070871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.070877] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.070884] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.081199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 00:32:31.891 [2024-07-24 10:54:39.090995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.091033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.091052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.091058] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.091064] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.101399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 00:32:31.891 [2024-07-24 10:54:39.110896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.110939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.110954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.110961] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.110967] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.121229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 
00:32:31.891 [2024-07-24 10:54:39.130918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.130957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.130972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.130979] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.130985] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.141323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 00:32:31.891 [2024-07-24 10:54:39.150995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.151032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.151047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.151053] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.151059] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.161399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 00:32:31.891 [2024-07-24 10:54:39.171084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.171116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.171131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.171138] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.171151] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.181335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 
00:32:31.891 [2024-07-24 10:54:39.191176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.191217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.191231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.191238] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.191244] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.201636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 00:32:31.891 [2024-07-24 10:54:39.211090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.211132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.211147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.211154] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.211160] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.221488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 00:32:31.891 [2024-07-24 10:54:39.231171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.231205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.231220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.231226] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.231232] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.241481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 
00:32:31.891 [2024-07-24 10:54:39.251387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.891 [2024-07-24 10:54:39.251420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.891 [2024-07-24 10:54:39.251435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.891 [2024-07-24 10:54:39.251441] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.891 [2024-07-24 10:54:39.251447] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.891 [2024-07-24 10:54:39.261555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.891 qpair failed and we were unable to recover it. 00:32:31.892 [2024-07-24 10:54:39.271335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.892 [2024-07-24 10:54:39.271374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.892 [2024-07-24 10:54:39.271389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.892 [2024-07-24 10:54:39.271396] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.892 [2024-07-24 10:54:39.271401] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.892 [2024-07-24 10:54:39.281606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.892 qpair failed and we were unable to recover it. 00:32:31.892 [2024-07-24 10:54:39.291467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.892 [2024-07-24 10:54:39.291520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.892 [2024-07-24 10:54:39.291535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.892 [2024-07-24 10:54:39.291541] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.892 [2024-07-24 10:54:39.291547] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.892 [2024-07-24 10:54:39.301736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.892 qpair failed and we were unable to recover it. 
00:32:31.892 [2024-07-24 10:54:39.311499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.892 [2024-07-24 10:54:39.311536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.892 [2024-07-24 10:54:39.311551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.892 [2024-07-24 10:54:39.311557] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.892 [2024-07-24 10:54:39.311563] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.892 [2024-07-24 10:54:39.321931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.892 qpair failed and we were unable to recover it. 00:32:31.892 [2024-07-24 10:54:39.331554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:31.892 [2024-07-24 10:54:39.331591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:31.892 [2024-07-24 10:54:39.331605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:31.892 [2024-07-24 10:54:39.331611] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:31.892 [2024-07-24 10:54:39.331617] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:31.892 [2024-07-24 10:54:39.341943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.892 qpair failed and we were unable to recover it. 00:32:32.151 [2024-07-24 10:54:39.351566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.151 [2024-07-24 10:54:39.351606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.151 [2024-07-24 10:54:39.351623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.151 [2024-07-24 10:54:39.351633] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.151 [2024-07-24 10:54:39.351639] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.151 [2024-07-24 10:54:39.362047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.151 qpair failed and we were unable to recover it. 
00:32:32.151 [2024-07-24 10:54:39.371690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.151 [2024-07-24 10:54:39.371729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.151 [2024-07-24 10:54:39.371744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.151 [2024-07-24 10:54:39.371750] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.151 [2024-07-24 10:54:39.371756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.151 [2024-07-24 10:54:39.381842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.151 qpair failed and we were unable to recover it. 00:32:32.151 [2024-07-24 10:54:39.391678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.151 [2024-07-24 10:54:39.391712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.151 [2024-07-24 10:54:39.391726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.151 [2024-07-24 10:54:39.391733] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.151 [2024-07-24 10:54:39.391739] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.151 [2024-07-24 10:54:39.402180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.151 qpair failed and we were unable to recover it. 00:32:32.151 [2024-07-24 10:54:39.411824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.151 [2024-07-24 10:54:39.411861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.151 [2024-07-24 10:54:39.411875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.151 [2024-07-24 10:54:39.411882] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.151 [2024-07-24 10:54:39.411887] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.151 [2024-07-24 10:54:39.422163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.151 qpair failed and we were unable to recover it. 
00:32:32.151 [2024-07-24 10:54:39.431932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.151 [2024-07-24 10:54:39.431969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.151 [2024-07-24 10:54:39.431984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.151 [2024-07-24 10:54:39.431990] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.151 [2024-07-24 10:54:39.431996] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.151 [2024-07-24 10:54:39.442081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.151 qpair failed and we were unable to recover it. 00:32:32.151 [2024-07-24 10:54:39.452079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.151 [2024-07-24 10:54:39.452118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.151 [2024-07-24 10:54:39.452132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.151 [2024-07-24 10:54:39.452139] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.452145] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.462403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 00:32:32.152 [2024-07-24 10:54:39.472038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.152 [2024-07-24 10:54:39.472075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.152 [2024-07-24 10:54:39.472089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.152 [2024-07-24 10:54:39.472095] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.472101] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.482410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 
00:32:32.152 [2024-07-24 10:54:39.492008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.152 [2024-07-24 10:54:39.492040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.152 [2024-07-24 10:54:39.492055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.152 [2024-07-24 10:54:39.492061] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.492067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.502450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 00:32:32.152 [2024-07-24 10:54:39.512052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.152 [2024-07-24 10:54:39.512090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.152 [2024-07-24 10:54:39.512104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.152 [2024-07-24 10:54:39.512110] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.512116] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.522414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 00:32:32.152 [2024-07-24 10:54:39.532246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.152 [2024-07-24 10:54:39.532285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.152 [2024-07-24 10:54:39.532303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.152 [2024-07-24 10:54:39.532309] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.532315] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.542687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 
00:32:32.152 [2024-07-24 10:54:39.552360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.152 [2024-07-24 10:54:39.552395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.152 [2024-07-24 10:54:39.552409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.152 [2024-07-24 10:54:39.552415] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.552421] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.562703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 00:32:32.152 [2024-07-24 10:54:39.572386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.152 [2024-07-24 10:54:39.572423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.152 [2024-07-24 10:54:39.572437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.152 [2024-07-24 10:54:39.572443] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.572449] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.582734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 00:32:32.152 [2024-07-24 10:54:39.592316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.152 [2024-07-24 10:54:39.592355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.152 [2024-07-24 10:54:39.592369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.152 [2024-07-24 10:54:39.592376] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.152 [2024-07-24 10:54:39.592381] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.152 [2024-07-24 10:54:39.603038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.152 qpair failed and we were unable to recover it. 
00:32:32.411 [2024-07-24 10:54:39.612430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.612473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.612486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.612500] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.612510] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.622944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 00:32:32.411 [2024-07-24 10:54:39.632402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.632436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.632451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.632457] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.632463] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.642821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 00:32:32.411 [2024-07-24 10:54:39.652669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.652706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.652720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.652726] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.652732] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.662911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 
00:32:32.411 [2024-07-24 10:54:39.672628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.672665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.672679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.672686] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.672692] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.682902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 00:32:32.411 [2024-07-24 10:54:39.692660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.692700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.692714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.692721] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.692726] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.703205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 00:32:32.411 [2024-07-24 10:54:39.712829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.712866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.712880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.712886] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.712892] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.723054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 
00:32:32.411 [2024-07-24 10:54:39.732841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.732879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.732892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.732899] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.732904] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.743177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 00:32:32.411 [2024-07-24 10:54:39.752888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.752925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.752939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.752945] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.752951] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.763313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 00:32:32.411 [2024-07-24 10:54:39.772986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.773025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.773038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.773044] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.411 [2024-07-24 10:54:39.773050] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.411 [2024-07-24 10:54:39.783371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.411 qpair failed and we were unable to recover it. 
00:32:32.411 [2024-07-24 10:54:39.793148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.411 [2024-07-24 10:54:39.793181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.411 [2024-07-24 10:54:39.793195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.411 [2024-07-24 10:54:39.793206] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.412 [2024-07-24 10:54:39.793211] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.412 [2024-07-24 10:54:39.803559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.412 qpair failed and we were unable to recover it. 00:32:32.412 [2024-07-24 10:54:39.813112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.412 [2024-07-24 10:54:39.813147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.412 [2024-07-24 10:54:39.813161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.412 [2024-07-24 10:54:39.813168] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.412 [2024-07-24 10:54:39.813174] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.412 [2024-07-24 10:54:39.823556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.412 qpair failed and we were unable to recover it. 00:32:32.412 [2024-07-24 10:54:39.833274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.412 [2024-07-24 10:54:39.833314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.412 [2024-07-24 10:54:39.833328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.412 [2024-07-24 10:54:39.833336] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.412 [2024-07-24 10:54:39.833342] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.412 [2024-07-24 10:54:39.843573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.412 qpair failed and we were unable to recover it. 
00:32:32.412 [2024-07-24 10:54:39.853186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.412 [2024-07-24 10:54:39.853227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.412 [2024-07-24 10:54:39.853241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.412 [2024-07-24 10:54:39.853248] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.412 [2024-07-24 10:54:39.853254] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.412 [2024-07-24 10:54:39.863383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.412 qpair failed and we were unable to recover it. 00:32:32.671 [2024-07-24 10:54:39.873132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:39.873163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:39.873178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:39.873184] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:39.873190] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:39.883613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 00:32:32.671 [2024-07-24 10:54:39.893200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:39.893241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:39.893255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:39.893262] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:39.893268] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:39.903524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 
00:32:32.671 [2024-07-24 10:54:39.913283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:39.913321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:39.913335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:39.913342] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:39.913348] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:39.923700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 00:32:32.671 [2024-07-24 10:54:39.933320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:39.933357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:39.933373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:39.933379] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:39.933385] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:39.943810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 00:32:32.671 [2024-07-24 10:54:39.953361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:39.953404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:39.953418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:39.953425] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:39.953430] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:39.963866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 
00:32:32.671 [2024-07-24 10:54:39.973478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:39.973514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:39.973532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:39.973538] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:39.973544] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:39.983879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 00:32:32.671 [2024-07-24 10:54:39.993669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:39.993707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:39.993721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:39.993727] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:39.993733] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:40.004112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 00:32:32.671 [2024-07-24 10:54:40.013607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:40.013668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:40.013684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:40.013691] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:40.013698] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:40.024041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 
00:32:32.671 [2024-07-24 10:54:40.033681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.671 [2024-07-24 10:54:40.033720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.671 [2024-07-24 10:54:40.033738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.671 [2024-07-24 10:54:40.033746] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.671 [2024-07-24 10:54:40.033754] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.671 [2024-07-24 10:54:40.044140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.671 qpair failed and we were unable to recover it. 00:32:32.671 [2024-07-24 10:54:40.053822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.672 [2024-07-24 10:54:40.053861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.672 [2024-07-24 10:54:40.053877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.672 [2024-07-24 10:54:40.053884] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.672 [2024-07-24 10:54:40.053893] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.672 [2024-07-24 10:54:40.064250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.672 qpair failed and we were unable to recover it. 00:32:32.672 [2024-07-24 10:54:40.074996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.672 [2024-07-24 10:54:40.075046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.672 [2024-07-24 10:54:40.075065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.672 [2024-07-24 10:54:40.075074] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.672 [2024-07-24 10:54:40.075081] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.672 [2024-07-24 10:54:40.084420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.672 qpair failed and we were unable to recover it. 
00:32:32.672 [2024-07-24 10:54:40.093960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.672 [2024-07-24 10:54:40.093998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.672 [2024-07-24 10:54:40.094017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.672 [2024-07-24 10:54:40.094024] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.672 [2024-07-24 10:54:40.094030] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.672 [2024-07-24 10:54:40.104399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.672 qpair failed and we were unable to recover it. 00:32:32.672 [2024-07-24 10:54:40.113997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.672 [2024-07-24 10:54:40.114039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.672 [2024-07-24 10:54:40.114056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.672 [2024-07-24 10:54:40.114062] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.672 [2024-07-24 10:54:40.114069] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.931 [2024-07-24 10:54:40.124328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.931 qpair failed and we were unable to recover it. 00:32:32.931 [2024-07-24 10:54:40.133989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.931 [2024-07-24 10:54:40.134026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.931 [2024-07-24 10:54:40.134041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.931 [2024-07-24 10:54:40.134047] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.931 [2024-07-24 10:54:40.134053] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.931 [2024-07-24 10:54:40.144383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.931 qpair failed and we were unable to recover it. 
00:32:32.931 [2024-07-24 10:54:40.154091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.931 [2024-07-24 10:54:40.154130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.931 [2024-07-24 10:54:40.154144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.931 [2024-07-24 10:54:40.154150] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.931 [2024-07-24 10:54:40.154156] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.931 [2024-07-24 10:54:40.164313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.931 qpair failed and we were unable to recover it. 00:32:32.931 [2024-07-24 10:54:40.174186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.931 [2024-07-24 10:54:40.174225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.931 [2024-07-24 10:54:40.174239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.931 [2024-07-24 10:54:40.174245] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.931 [2024-07-24 10:54:40.174251] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.931 [2024-07-24 10:54:40.184592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.931 qpair failed and we were unable to recover it. 00:32:32.931 [2024-07-24 10:54:40.194233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.931 [2024-07-24 10:54:40.194275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.931 [2024-07-24 10:54:40.194289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.194295] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.194301] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.204645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 
00:32:32.932 [2024-07-24 10:54:40.214234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.214267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.214282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.214288] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.214294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.224583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 00:32:32.932 [2024-07-24 10:54:40.234378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.234419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.234434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.234444] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.234449] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.244717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 00:32:32.932 [2024-07-24 10:54:40.254353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.254396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.254411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.254417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.254423] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.264748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 
00:32:32.932 [2024-07-24 10:54:40.274413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.274447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.274461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.274467] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.274473] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.284798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 00:32:32.932 [2024-07-24 10:54:40.294424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.294462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.294476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.294483] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.294488] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.304875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 00:32:32.932 [2024-07-24 10:54:40.314523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.314565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.314580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.314587] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.314592] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.324933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 
00:32:32.932 [2024-07-24 10:54:40.334720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.334758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.334773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.334779] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.334785] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.345204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 00:32:32.932 [2024-07-24 10:54:40.354894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.354929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.354944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.354951] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.354957] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:32.932 [2024-07-24 10:54:40.365136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:32.932 qpair failed and we were unable to recover it. 00:32:32.932 [2024-07-24 10:54:40.374756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:32.932 [2024-07-24 10:54:40.374794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:32.932 [2024-07-24 10:54:40.374808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:32.932 [2024-07-24 10:54:40.374814] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:32.932 [2024-07-24 10:54:40.374820] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.385230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 
00:32:33.192 [2024-07-24 10:54:40.394782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.394818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.394832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.394839] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.394844] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.405053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 00:32:33.192 [2024-07-24 10:54:40.414876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.414919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.414937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.414944] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.414950] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.425376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 00:32:33.192 [2024-07-24 10:54:40.434919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.434958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.434972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.434979] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.434984] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.445258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 
00:32:33.192 [2024-07-24 10:54:40.454978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.455013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.455027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.455033] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.455039] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.465386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 00:32:33.192 [2024-07-24 10:54:40.475066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.475105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.475119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.475125] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.475131] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.485398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 00:32:33.192 [2024-07-24 10:54:40.495163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.495201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.495215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.495221] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.495231] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.505743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 
00:32:33.192 [2024-07-24 10:54:40.515225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.515262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.515276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.515283] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.515289] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.525627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 00:32:33.192 [2024-07-24 10:54:40.535143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.535176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.535189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.535196] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.535202] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.545749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 00:32:33.192 [2024-07-24 10:54:40.555258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.555294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.555308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.555315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.555321] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.565796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 
00:32:33.192 [2024-07-24 10:54:40.575377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.575420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.575434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.575441] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.192 [2024-07-24 10:54:40.575446] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.192 [2024-07-24 10:54:40.585975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.192 qpair failed and we were unable to recover it. 00:32:33.192 [2024-07-24 10:54:40.595362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.192 [2024-07-24 10:54:40.595399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.192 [2024-07-24 10:54:40.595413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.192 [2024-07-24 10:54:40.595419] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.193 [2024-07-24 10:54:40.595425] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.193 [2024-07-24 10:54:40.605900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.193 qpair failed and we were unable to recover it. 00:32:33.193 [2024-07-24 10:54:40.615358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.193 [2024-07-24 10:54:40.615394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.193 [2024-07-24 10:54:40.615408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.193 [2024-07-24 10:54:40.615415] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.193 [2024-07-24 10:54:40.615420] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.193 [2024-07-24 10:54:40.625745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.193 qpair failed and we were unable to recover it. 
00:32:33.193 [2024-07-24 10:54:40.635512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.193 [2024-07-24 10:54:40.635550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.193 [2024-07-24 10:54:40.635565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.193 [2024-07-24 10:54:40.635571] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.193 [2024-07-24 10:54:40.635577] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.452 [2024-07-24 10:54:40.645929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-24 10:54:40.655619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.452 [2024-07-24 10:54:40.655654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.452 [2024-07-24 10:54:40.655668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.452 [2024-07-24 10:54:40.655674] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.452 [2024-07-24 10:54:40.655680] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.452 [2024-07-24 10:54:40.665844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-24 10:54:40.675518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.452 [2024-07-24 10:54:40.675557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.452 [2024-07-24 10:54:40.675576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.452 [2024-07-24 10:54:40.675582] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.452 [2024-07-24 10:54:40.675588] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.452 [2024-07-24 10:54:40.685982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.452 qpair failed and we were unable to recover it. 
00:32:33.452 [2024-07-24 10:54:40.695738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.452 [2024-07-24 10:54:40.695775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.452 [2024-07-24 10:54:40.695789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.452 [2024-07-24 10:54:40.695796] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.452 [2024-07-24 10:54:40.695801] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.452 [2024-07-24 10:54:40.706043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-24 10:54:40.715733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.452 [2024-07-24 10:54:40.715770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.452 [2024-07-24 10:54:40.715785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.452 [2024-07-24 10:54:40.715791] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.452 [2024-07-24 10:54:40.715797] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.452 [2024-07-24 10:54:40.726179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-24 10:54:40.735756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.735797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.735811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.735817] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.735823] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.746026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 
00:32:33.453 [2024-07-24 10:54:40.755783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.755820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.755834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.755841] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.755847] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.766290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-24 10:54:40.775843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.775882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.775896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.775903] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.775908] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.786278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-24 10:54:40.795940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.795978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.795992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.795998] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.796004] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.806201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 
00:32:33.453 [2024-07-24 10:54:40.816055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.816100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.816114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.816121] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.816126] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.826351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-24 10:54:40.836035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.836076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.836091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.836097] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.836103] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.846272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-24 10:54:40.856176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.856208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.856225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.856232] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.856237] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.866460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 
00:32:33.453 [2024-07-24 10:54:40.876149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.876187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.876201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.876207] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.876213] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.453 [2024-07-24 10:54:40.886504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-24 10:54:40.896267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.453 [2024-07-24 10:54:40.896310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.453 [2024-07-24 10:54:40.896324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.453 [2024-07-24 10:54:40.896331] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.453 [2024-07-24 10:54:40.896337] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:40.906615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:40.916268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:40.916301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:40.916315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:40.916322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:40.916327] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:40.926704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 
00:32:33.713 [2024-07-24 10:54:40.936378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:40.936419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:40.936434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:40.936440] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:40.936450] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:40.946774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:40.956410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:40.956447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:40.956461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:40.956467] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:40.956472] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:40.966765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:40.976396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:40.976437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:40.976451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:40.976457] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:40.976463] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:40.986718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 
00:32:33.713 [2024-07-24 10:54:40.996548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:40.996588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:40.996602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:40.996608] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:40.996614] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:41.006835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:41.016619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:41.016660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:41.016675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:41.016682] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:41.016687] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:41.027017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:41.036748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:41.036787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:41.036800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:41.036807] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:41.036812] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:41.047122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 
00:32:33.713 [2024-07-24 10:54:41.056666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:41.056707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:41.056721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:41.056727] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:41.056733] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:41.067108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:41.076764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:41.076799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:41.076813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:41.076819] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:41.076825] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:41.087277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:41.096932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:41.096970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:41.096984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:41.096991] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:41.096997] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:41.107231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 
00:32:33.713 [2024-07-24 10:54:41.117024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.713 [2024-07-24 10:54:41.117064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.713 [2024-07-24 10:54:41.117084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.713 [2024-07-24 10:54:41.117090] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.713 [2024-07-24 10:54:41.117095] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.713 [2024-07-24 10:54:41.127210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.713 qpair failed and we were unable to recover it. 00:32:33.713 [2024-07-24 10:54:41.136985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.714 [2024-07-24 10:54:41.137027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.714 [2024-07-24 10:54:41.137042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.714 [2024-07-24 10:54:41.137048] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.714 [2024-07-24 10:54:41.137054] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.714 [2024-07-24 10:54:41.147372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.714 qpair failed and we were unable to recover it. 00:32:33.714 [2024-07-24 10:54:41.157055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.714 [2024-07-24 10:54:41.157094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.714 [2024-07-24 10:54:41.157108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.714 [2024-07-24 10:54:41.157114] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.714 [2024-07-24 10:54:41.157119] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.167176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.973 qpair failed and we were unable to recover it. 
00:32:33.973 [2024-07-24 10:54:41.177132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.973 [2024-07-24 10:54:41.177171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.973 [2024-07-24 10:54:41.177185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.973 [2024-07-24 10:54:41.177191] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.973 [2024-07-24 10:54:41.177197] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.187445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.973 qpair failed and we were unable to recover it. 00:32:33.973 [2024-07-24 10:54:41.197157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.973 [2024-07-24 10:54:41.197194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.973 [2024-07-24 10:54:41.197208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.973 [2024-07-24 10:54:41.197215] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.973 [2024-07-24 10:54:41.197220] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.207476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.973 qpair failed and we were unable to recover it. 00:32:33.973 [2024-07-24 10:54:41.217279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.973 [2024-07-24 10:54:41.217319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.973 [2024-07-24 10:54:41.217332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.973 [2024-07-24 10:54:41.217339] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.973 [2024-07-24 10:54:41.217345] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.227499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.973 qpair failed and we were unable to recover it. 
00:32:33.973 [2024-07-24 10:54:41.237395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.973 [2024-07-24 10:54:41.237431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.973 [2024-07-24 10:54:41.237444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.973 [2024-07-24 10:54:41.237451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.973 [2024-07-24 10:54:41.237456] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.247741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.973 qpair failed and we were unable to recover it. 00:32:33.973 [2024-07-24 10:54:41.257405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.973 [2024-07-24 10:54:41.257446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.973 [2024-07-24 10:54:41.257461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.973 [2024-07-24 10:54:41.257468] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.973 [2024-07-24 10:54:41.257474] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.267764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.973 qpair failed and we were unable to recover it. 00:32:33.973 [2024-07-24 10:54:41.277399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.973 [2024-07-24 10:54:41.277437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.973 [2024-07-24 10:54:41.277452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.973 [2024-07-24 10:54:41.277458] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.973 [2024-07-24 10:54:41.277464] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.287711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.973 qpair failed and we were unable to recover it. 
00:32:33.973 [2024-07-24 10:54:41.297454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.973 [2024-07-24 10:54:41.297497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.973 [2024-07-24 10:54:41.297515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.973 [2024-07-24 10:54:41.297522] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.973 [2024-07-24 10:54:41.297528] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.973 [2024-07-24 10:54:41.307720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.974 qpair failed and we were unable to recover it. 00:32:33.974 [2024-07-24 10:54:41.317668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.974 [2024-07-24 10:54:41.317710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.974 [2024-07-24 10:54:41.317725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.974 [2024-07-24 10:54:41.317732] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.974 [2024-07-24 10:54:41.317738] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.974 [2024-07-24 10:54:41.327847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.974 qpair failed and we were unable to recover it. 00:32:33.974 [2024-07-24 10:54:41.337589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.974 [2024-07-24 10:54:41.337629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.974 [2024-07-24 10:54:41.337643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.974 [2024-07-24 10:54:41.337649] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.974 [2024-07-24 10:54:41.337655] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.974 [2024-07-24 10:54:41.347920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.974 qpair failed and we were unable to recover it. 
00:32:33.974 [2024-07-24 10:54:41.357650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.974 [2024-07-24 10:54:41.357688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.974 [2024-07-24 10:54:41.357703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.974 [2024-07-24 10:54:41.357710] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.974 [2024-07-24 10:54:41.357716] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.974 [2024-07-24 10:54:41.368085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.974 qpair failed and we were unable to recover it. 00:32:33.974 [2024-07-24 10:54:41.377721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.974 [2024-07-24 10:54:41.377759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.974 [2024-07-24 10:54:41.377774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.974 [2024-07-24 10:54:41.377780] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.974 [2024-07-24 10:54:41.377789] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.974 [2024-07-24 10:54:41.387946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.974 qpair failed and we were unable to recover it. 00:32:33.974 [2024-07-24 10:54:41.397804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.974 [2024-07-24 10:54:41.397842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.974 [2024-07-24 10:54:41.397856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.974 [2024-07-24 10:54:41.397862] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.974 [2024-07-24 10:54:41.397868] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:33.974 [2024-07-24 10:54:41.408077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.974 qpair failed and we were unable to recover it. 
00:32:33.974 [2024-07-24 10:54:41.417876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:33.974 [2024-07-24 10:54:41.417916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:33.974 [2024-07-24 10:54:41.417931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:33.974 [2024-07-24 10:54:41.417937] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:33.974 [2024-07-24 10:54:41.417943] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.233 [2024-07-24 10:54:41.428001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.233 qpair failed and we were unable to recover it. 00:32:34.233 [2024-07-24 10:54:41.437847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.233 [2024-07-24 10:54:41.437882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.233 [2024-07-24 10:54:41.437896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.233 [2024-07-24 10:54:41.437902] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.233 [2024-07-24 10:54:41.437908] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.233 [2024-07-24 10:54:41.448263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.233 qpair failed and we were unable to recover it. 00:32:34.233 [2024-07-24 10:54:41.457987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.233 [2024-07-24 10:54:41.458026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.233 [2024-07-24 10:54:41.458040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.233 [2024-07-24 10:54:41.458046] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.233 [2024-07-24 10:54:41.458052] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.233 [2024-07-24 10:54:41.468265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.233 qpair failed and we were unable to recover it. 
00:32:34.233 [2024-07-24 10:54:41.478271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.233 [2024-07-24 10:54:41.478308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.233 [2024-07-24 10:54:41.478322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.233 [2024-07-24 10:54:41.478329] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.233 [2024-07-24 10:54:41.478334] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.233 [2024-07-24 10:54:41.488459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.233 qpair failed and we were unable to recover it. 00:32:34.233 [2024-07-24 10:54:41.498091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.233 [2024-07-24 10:54:41.498130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.233 [2024-07-24 10:54:41.498144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.233 [2024-07-24 10:54:41.498150] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.233 [2024-07-24 10:54:41.498156] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.233 [2024-07-24 10:54:41.508376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.233 qpair failed and we were unable to recover it. 00:32:34.233 [2024-07-24 10:54:41.518190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.233 [2024-07-24 10:54:41.518229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.233 [2024-07-24 10:54:41.518243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.233 [2024-07-24 10:54:41.518250] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.233 [2024-07-24 10:54:41.518256] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.233 [2024-07-24 10:54:41.528406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.233 qpair failed and we were unable to recover it. 
00:32:34.233 [2024-07-24 10:54:41.538191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.233 [2024-07-24 10:54:41.538232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.233 [2024-07-24 10:54:41.538246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.233 [2024-07-24 10:54:41.538253] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.538259] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.234 [2024-07-24 10:54:41.548400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.234 qpair failed and we were unable to recover it. 00:32:34.234 [2024-07-24 10:54:41.558216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.234 [2024-07-24 10:54:41.558255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.234 [2024-07-24 10:54:41.558272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.234 [2024-07-24 10:54:41.558278] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.558284] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.234 [2024-07-24 10:54:41.568497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.234 qpair failed and we were unable to recover it. 00:32:34.234 [2024-07-24 10:54:41.578311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.234 [2024-07-24 10:54:41.578349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.234 [2024-07-24 10:54:41.578363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.234 [2024-07-24 10:54:41.578370] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.578376] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.234 [2024-07-24 10:54:41.588651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.234 qpair failed and we were unable to recover it. 
00:32:34.234 [2024-07-24 10:54:41.598419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.234 [2024-07-24 10:54:41.598457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.234 [2024-07-24 10:54:41.598470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.234 [2024-07-24 10:54:41.598477] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.598483] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.234 [2024-07-24 10:54:41.608725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.234 qpair failed and we were unable to recover it. 00:32:34.234 [2024-07-24 10:54:41.618350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.234 [2024-07-24 10:54:41.618386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.234 [2024-07-24 10:54:41.618400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.234 [2024-07-24 10:54:41.618407] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.618412] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.234 [2024-07-24 10:54:41.628724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.234 qpair failed and we were unable to recover it. 00:32:34.234 [2024-07-24 10:54:41.638395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.234 [2024-07-24 10:54:41.638437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.234 [2024-07-24 10:54:41.638451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.234 [2024-07-24 10:54:41.638457] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.638463] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.234 [2024-07-24 10:54:41.648787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.234 qpair failed and we were unable to recover it. 
00:32:34.234 [2024-07-24 10:54:41.658504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.234 [2024-07-24 10:54:41.658539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.234 [2024-07-24 10:54:41.658553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.234 [2024-07-24 10:54:41.658560] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.658566] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.234 [2024-07-24 10:54:41.668876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.234 qpair failed and we were unable to recover it. 00:32:34.234 [2024-07-24 10:54:41.678660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.234 [2024-07-24 10:54:41.678698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.234 [2024-07-24 10:54:41.678711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.234 [2024-07-24 10:54:41.678718] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.234 [2024-07-24 10:54:41.678724] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.493 [2024-07-24 10:54:41.688857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.493 qpair failed and we were unable to recover it. 00:32:34.493 [2024-07-24 10:54:41.698566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.698609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.698623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.698629] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.698635] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.708933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 
00:32:34.494 [2024-07-24 10:54:41.718676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.718714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.718728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.718735] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.718740] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.729067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 00:32:34.494 [2024-07-24 10:54:41.738828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.738867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.738885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.738891] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.738897] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.749225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 00:32:34.494 [2024-07-24 10:54:41.758904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.758940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.758954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.758960] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.758966] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.769207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 
00:32:34.494 [2024-07-24 10:54:41.778831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.778868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.778882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.778889] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.778894] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.789109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 00:32:34.494 [2024-07-24 10:54:41.798944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.798988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.799001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.799008] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.799014] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.809363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 00:32:34.494 [2024-07-24 10:54:41.818941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.818980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.818994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.819001] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.819009] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.829459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 
00:32:34.494 [2024-07-24 10:54:41.839000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.839038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.839053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.839059] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.839065] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.849498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 00:32:34.494 [2024-07-24 10:54:41.859158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.859197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.859212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.859218] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.859224] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.869698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 00:32:34.494 [2024-07-24 10:54:41.879228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.879269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.879283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.879290] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.879296] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.889746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 
00:32:34.494 [2024-07-24 10:54:41.899289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.899329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.899343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.899349] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.494 [2024-07-24 10:54:41.899355] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.494 [2024-07-24 10:54:41.909884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.494 qpair failed and we were unable to recover it. 00:32:34.494 [2024-07-24 10:54:41.919376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.494 [2024-07-24 10:54:41.919416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.494 [2024-07-24 10:54:41.919431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.494 [2024-07-24 10:54:41.919438] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.495 [2024-07-24 10:54:41.919443] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.495 [2024-07-24 10:54:41.929734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.495 qpair failed and we were unable to recover it. 00:32:34.495 [2024-07-24 10:54:41.939374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.495 [2024-07-24 10:54:41.939410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.495 [2024-07-24 10:54:41.939424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.495 [2024-07-24 10:54:41.939430] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.495 [2024-07-24 10:54:41.939436] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:41.949824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 
00:32:34.753 [2024-07-24 10:54:41.959468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:41.959506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:41.959520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:41.959526] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:41.959532] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:41.969949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 00:32:34.753 [2024-07-24 10:54:41.979603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:41.979643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:41.979657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:41.979663] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:41.979669] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:41.989879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 00:32:34.753 [2024-07-24 10:54:41.999648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:41.999684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:41.999702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:41.999708] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:41.999714] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:42.009908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 
00:32:34.753 [2024-07-24 10:54:42.019600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:42.019645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:42.019659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:42.019665] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:42.019671] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:42.030080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 00:32:34.753 [2024-07-24 10:54:42.039799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:42.039832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:42.039846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:42.039852] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:42.039858] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:42.050196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 00:32:34.753 [2024-07-24 10:54:42.059814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:42.059856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:42.059870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:42.059877] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:42.059883] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:42.070385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 
00:32:34.753 [2024-07-24 10:54:42.079824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:42.079863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:42.079877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:42.079883] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:42.079889] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.753 [2024-07-24 10:54:42.090318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.753 qpair failed and we were unable to recover it. 00:32:34.753 [2024-07-24 10:54:42.099889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.753 [2024-07-24 10:54:42.099927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.753 [2024-07-24 10:54:42.099941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.753 [2024-07-24 10:54:42.099947] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.753 [2024-07-24 10:54:42.099953] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.754 [2024-07-24 10:54:42.110378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.754 qpair failed and we were unable to recover it. 00:32:34.754 [2024-07-24 10:54:42.120024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.754 [2024-07-24 10:54:42.120059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.754 [2024-07-24 10:54:42.120073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.754 [2024-07-24 10:54:42.120080] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.754 [2024-07-24 10:54:42.120085] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.754 [2024-07-24 10:54:42.130280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.754 qpair failed and we were unable to recover it. 
00:32:34.754 [2024-07-24 10:54:42.140036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.754 [2024-07-24 10:54:42.140075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.754 [2024-07-24 10:54:42.140089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.754 [2024-07-24 10:54:42.140096] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.754 [2024-07-24 10:54:42.140101] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.754 [2024-07-24 10:54:42.150534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.754 qpair failed and we were unable to recover it. 00:32:34.754 [2024-07-24 10:54:42.160194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.754 [2024-07-24 10:54:42.160231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.754 [2024-07-24 10:54:42.160245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.754 [2024-07-24 10:54:42.160251] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.754 [2024-07-24 10:54:42.160256] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.754 [2024-07-24 10:54:42.170496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.754 qpair failed and we were unable to recover it. 00:32:34.754 [2024-07-24 10:54:42.180119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.754 [2024-07-24 10:54:42.180155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.754 [2024-07-24 10:54:42.180171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.754 [2024-07-24 10:54:42.180178] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.754 [2024-07-24 10:54:42.180184] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:34.754 [2024-07-24 10:54:42.190517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.754 qpair failed and we were unable to recover it. 
00:32:34.754 [2024-07-24 10:54:42.200227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.754 [2024-07-24 10:54:42.200264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.754 [2024-07-24 10:54:42.200278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.754 [2024-07-24 10:54:42.200284] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.754 [2024-07-24 10:54:42.200290] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.013 [2024-07-24 10:54:42.210650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.013 qpair failed and we were unable to recover it. 00:32:35.013 [2024-07-24 10:54:42.220277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.013 [2024-07-24 10:54:42.220311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.013 [2024-07-24 10:54:42.220325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.013 [2024-07-24 10:54:42.220331] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.013 [2024-07-24 10:54:42.220337] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.013 [2024-07-24 10:54:42.230738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.013 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.240364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.240401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.240414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.240421] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.240426] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.250894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 
00:32:35.014 [2024-07-24 10:54:42.260341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.260379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.260393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.260399] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.260407] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.270733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.280392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.280426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.280441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.280447] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.280453] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.290711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.300481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.300527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.300541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.300548] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.300553] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.311028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 
00:32:35.014 [2024-07-24 10:54:42.320621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.320663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.320678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.320684] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.320690] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.330993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.340795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.340835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.340850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.340856] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.340862] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.351184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.360825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.360864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.360879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.360885] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.360891] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.371003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 
00:32:35.014 [2024-07-24 10:54:42.380692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.380731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.380745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.380751] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.380757] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.391376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.400872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.400910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.400924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.400930] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.400936] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.411286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.420806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.420844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.420860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.420866] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.420872] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.431394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 
00:32:35.014 [2024-07-24 10:54:42.440965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.441006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.441026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.441032] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.441037] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.014 [2024-07-24 10:54:42.451544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.014 qpair failed and we were unable to recover it. 00:32:35.014 [2024-07-24 10:54:42.461028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.014 [2024-07-24 10:54:42.461062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.014 [2024-07-24 10:54:42.461075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.014 [2024-07-24 10:54:42.461082] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.014 [2024-07-24 10:54:42.461088] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.274 [2024-07-24 10:54:42.471382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.274 qpair failed and we were unable to recover it. 00:32:35.274 [2024-07-24 10:54:42.481068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.274 [2024-07-24 10:54:42.481109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.274 [2024-07-24 10:54:42.481122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.274 [2024-07-24 10:54:42.481129] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.274 [2024-07-24 10:54:42.481134] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.274 [2024-07-24 10:54:42.491488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.274 qpair failed and we were unable to recover it. 
00:32:35.274 [2024-07-24 10:54:42.501179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.274 [2024-07-24 10:54:42.501220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.274 [2024-07-24 10:54:42.501233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.274 [2024-07-24 10:54:42.501240] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.274 [2024-07-24 10:54:42.501245] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.274 [2024-07-24 10:54:42.511518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.274 qpair failed and we were unable to recover it. 00:32:35.274 [2024-07-24 10:54:42.521199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.274 [2024-07-24 10:54:42.521231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.274 [2024-07-24 10:54:42.521245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.274 [2024-07-24 10:54:42.521252] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.274 [2024-07-24 10:54:42.521258] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.274 [2024-07-24 10:54:42.531361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.274 qpair failed and we were unable to recover it. 00:32:35.274 [2024-07-24 10:54:42.541221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.274 [2024-07-24 10:54:42.541256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.274 [2024-07-24 10:54:42.541271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.274 [2024-07-24 10:54:42.541277] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.274 [2024-07-24 10:54:42.541283] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.274 [2024-07-24 10:54:42.551696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.274 qpair failed and we were unable to recover it. 
00:32:35.274 [2024-07-24 10:54:42.561225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.274 [2024-07-24 10:54:42.561263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.274 [2024-07-24 10:54:42.561277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.274 [2024-07-24 10:54:42.561284] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.274 [2024-07-24 10:54:42.561290] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.274 [2024-07-24 10:54:42.571767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.274 qpair failed and we were unable to recover it. 00:32:35.274 [2024-07-24 10:54:42.581332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.274 [2024-07-24 10:54:42.581374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.274 [2024-07-24 10:54:42.581388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.274 [2024-07-24 10:54:42.581394] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.274 [2024-07-24 10:54:42.581400] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.274 [2024-07-24 10:54:42.591750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.274 qpair failed and we were unable to recover it. 00:32:35.275 [2024-07-24 10:54:42.601423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.275 [2024-07-24 10:54:42.601463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.275 [2024-07-24 10:54:42.601478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.275 [2024-07-24 10:54:42.601485] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.275 [2024-07-24 10:54:42.601497] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.275 [2024-07-24 10:54:42.611715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.275 qpair failed and we were unable to recover it. 
00:32:35.275 [2024-07-24 10:54:42.621435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.275 [2024-07-24 10:54:42.621474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.275 [2024-07-24 10:54:42.621495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.275 [2024-07-24 10:54:42.621503] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.275 [2024-07-24 10:54:42.621509] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.275 [2024-07-24 10:54:42.631965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.275 qpair failed and we were unable to recover it. 00:32:35.275 [2024-07-24 10:54:42.641483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.275 [2024-07-24 10:54:42.641526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.275 [2024-07-24 10:54:42.641541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.275 [2024-07-24 10:54:42.641547] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.275 [2024-07-24 10:54:42.641553] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.275 [2024-07-24 10:54:42.652023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.275 qpair failed and we were unable to recover it. 00:32:35.275 [2024-07-24 10:54:42.661478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.275 [2024-07-24 10:54:42.661525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.275 [2024-07-24 10:54:42.661539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.275 [2024-07-24 10:54:42.661546] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.275 [2024-07-24 10:54:42.661551] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.275 [2024-07-24 10:54:42.672015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.275 qpair failed and we were unable to recover it. 
00:32:35.275 [2024-07-24 10:54:42.681605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.275 [2024-07-24 10:54:42.681641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.275 [2024-07-24 10:54:42.681654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.275 [2024-07-24 10:54:42.681660] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.275 [2024-07-24 10:54:42.681666] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.275 [2024-07-24 10:54:42.692098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.275 qpair failed and we were unable to recover it. 00:32:35.275 [2024-07-24 10:54:42.701865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.275 [2024-07-24 10:54:42.701906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.275 [2024-07-24 10:54:42.701919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.275 [2024-07-24 10:54:42.701925] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.275 [2024-07-24 10:54:42.701935] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.275 [2024-07-24 10:54:42.712269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.275 qpair failed and we were unable to recover it. 00:32:35.275 [2024-07-24 10:54:42.721804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.275 [2024-07-24 10:54:42.721846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.275 [2024-07-24 10:54:42.721860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.275 [2024-07-24 10:54:42.721867] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.275 [2024-07-24 10:54:42.721873] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.732277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 
00:32:35.535 [2024-07-24 10:54:42.741851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.741891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.741906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.741913] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.741918] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.752205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-07-24 10:54:42.761885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.761920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.761935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.761941] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.761947] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.772187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-07-24 10:54:42.782055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.782091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.782105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.782112] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.782118] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.792182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 
00:32:35.535 [2024-07-24 10:54:42.802035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.802073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.802087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.802093] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.802099] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.812535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-07-24 10:54:42.822099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.822136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.822150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.822157] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.822163] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.832576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-07-24 10:54:42.842166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.842204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.842218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.842224] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.842229] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.852726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 
00:32:35.535 [2024-07-24 10:54:42.862229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.862265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.862279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.862286] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.862292] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.872289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-07-24 10:54:42.882110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.535 [2024-07-24 10:54:42.882149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.535 [2024-07-24 10:54:42.882166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.535 [2024-07-24 10:54:42.882173] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.535 [2024-07-24 10:54:42.882178] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.535 [2024-07-24 10:54:42.892591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-07-24 10:54:42.902359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.536 [2024-07-24 10:54:42.902393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.536 [2024-07-24 10:54:42.902407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.536 [2024-07-24 10:54:42.902414] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.536 [2024-07-24 10:54:42.902419] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.536 [2024-07-24 10:54:42.912786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.536 qpair failed and we were unable to recover it. 
00:32:35.536 [2024-07-24 10:54:42.922351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.536 [2024-07-24 10:54:42.922388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.536 [2024-07-24 10:54:42.922402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.536 [2024-07-24 10:54:42.922408] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.536 [2024-07-24 10:54:42.922414] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.536 [2024-07-24 10:54:42.932598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.536 qpair failed and we were unable to recover it. 00:32:35.536 [2024-07-24 10:54:42.942419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.536 [2024-07-24 10:54:42.942458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.536 [2024-07-24 10:54:42.942471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.536 [2024-07-24 10:54:42.942478] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.536 [2024-07-24 10:54:42.942484] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.536 [2024-07-24 10:54:42.952852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.536 qpair failed and we were unable to recover it. 00:32:35.536 [2024-07-24 10:54:42.962533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.536 [2024-07-24 10:54:42.962573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.536 [2024-07-24 10:54:42.962588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.536 [2024-07-24 10:54:42.962594] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.536 [2024-07-24 10:54:42.962600] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.536 [2024-07-24 10:54:42.972687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.536 qpair failed and we were unable to recover it. 
00:32:35.536 [2024-07-24 10:54:42.982562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.536 [2024-07-24 10:54:42.982600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.536 [2024-07-24 10:54:42.982614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.536 [2024-07-24 10:54:42.982621] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.536 [2024-07-24 10:54:42.982627] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.847 [2024-07-24 10:54:42.992782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.847 qpair failed and we were unable to recover it. 00:32:35.847 [2024-07-24 10:54:43.002541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.847 [2024-07-24 10:54:43.002579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.847 [2024-07-24 10:54:43.002597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.847 [2024-07-24 10:54:43.002604] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.847 [2024-07-24 10:54:43.002610] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.847 [2024-07-24 10:54:43.012769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.847 qpair failed and we were unable to recover it. 00:32:35.847 [2024-07-24 10:54:43.022610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.847 [2024-07-24 10:54:43.022647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.847 [2024-07-24 10:54:43.022662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.847 [2024-07-24 10:54:43.022669] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.847 [2024-07-24 10:54:43.022675] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.847 [2024-07-24 10:54:43.032972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.847 qpair failed and we were unable to recover it. 
00:32:35.847 [2024-07-24 10:54:43.042721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.847 [2024-07-24 10:54:43.042760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.847 [2024-07-24 10:54:43.042774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.847 [2024-07-24 10:54:43.042780] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.847 [2024-07-24 10:54:43.042786] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.847 [2024-07-24 10:54:43.053078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.847 qpair failed and we were unable to recover it. 00:32:35.847 [2024-07-24 10:54:43.062764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.847 [2024-07-24 10:54:43.062808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.847 [2024-07-24 10:54:43.062825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.847 [2024-07-24 10:54:43.062831] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.847 [2024-07-24 10:54:43.062837] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.847 [2024-07-24 10:54:43.072953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.847 qpair failed and we were unable to recover it. 00:32:35.847 [2024-07-24 10:54:43.082818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.847 [2024-07-24 10:54:43.082855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.847 [2024-07-24 10:54:43.082869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.847 [2024-07-24 10:54:43.082876] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.847 [2024-07-24 10:54:43.082882] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.847 [2024-07-24 10:54:43.093182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.847 qpair failed and we were unable to recover it. 
00:32:35.847 [2024-07-24 10:54:43.102824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.847 [2024-07-24 10:54:43.102863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.847 [2024-07-24 10:54:43.102876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.847 [2024-07-24 10:54:43.102883] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.847 [2024-07-24 10:54:43.102888] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.847 [2024-07-24 10:54:43.113211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.847 qpair failed and we were unable to recover it. 00:32:35.847 [2024-07-24 10:54:43.122920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.122956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.122971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.122977] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.122983] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.133256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 00:32:35.848 [2024-07-24 10:54:43.142949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.142989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.143003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.143009] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.143019] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.153203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 
00:32:35.848 [2024-07-24 10:54:43.163083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.163119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.163134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.163141] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.163146] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.173356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 00:32:35.848 [2024-07-24 10:54:43.183152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.183186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.183200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.183206] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.183212] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.193432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 00:32:35.848 [2024-07-24 10:54:43.203050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.203087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.203101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.203107] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.203113] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.213563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 
00:32:35.848 [2024-07-24 10:54:43.223102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.223141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.223156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.223162] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.223168] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.233655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 00:32:35.848 [2024-07-24 10:54:43.243213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.243253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.243268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.243275] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.243281] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.253701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 00:32:35.848 [2024-07-24 10:54:43.263350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.848 [2024-07-24 10:54:43.263389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.848 [2024-07-24 10:54:43.263405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.848 [2024-07-24 10:54:43.263411] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.848 [2024-07-24 10:54:43.263417] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:35.848 [2024-07-24 10:54:43.273757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:35.848 qpair failed and we were unable to recover it. 
00:32:36.116 [2024-07-24 10:54:43.283438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.283477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.283496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.283503] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.283510] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.293779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.303482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.303527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.303542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.303548] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.303554] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.313729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.323325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.323365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.323382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.323389] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.323394] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.333808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 
00:32:36.116 [2024-07-24 10:54:43.343545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.343581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.343596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.343602] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.343608] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.353993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.363639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.363678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.363692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.363699] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.363705] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.373858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.383633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.383673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.383688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.383694] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.383700] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.394005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 
00:32:36.116 [2024-07-24 10:54:43.403682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.403718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.403732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.403739] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.403744] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.414246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.423768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.423800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.423814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.423821] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.423826] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.434232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.444033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.444071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.444084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.444091] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.444097] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.454345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 
00:32:36.116 [2024-07-24 10:54:43.463895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.463938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.463952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.463959] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.463966] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.474303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.483932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.483970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.483984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.483991] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.483996] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.494085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.504067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.504105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.504122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.504128] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.504134] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.514407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 
00:32:36.116 [2024-07-24 10:54:43.524149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.524184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.524198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.524205] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.524210] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.534586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.544196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.544239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.544253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.544259] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.544265] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.116 [2024-07-24 10:54:43.554585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.116 qpair failed and we were unable to recover it. 00:32:36.116 [2024-07-24 10:54:43.564224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.116 [2024-07-24 10:54:43.564265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.116 [2024-07-24 10:54:43.564279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.116 [2024-07-24 10:54:43.564286] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.116 [2024-07-24 10:54:43.564291] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.375 [2024-07-24 10:54:43.574503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.375 qpair failed and we were unable to recover it. 
00:32:36.375 [2024-07-24 10:54:43.584195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.375 [2024-07-24 10:54:43.584232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.375 [2024-07-24 10:54:43.584247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.375 [2024-07-24 10:54:43.584253] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.375 [2024-07-24 10:54:43.584262] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.375 [2024-07-24 10:54:43.594725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.375 qpair failed and we were unable to recover it. 00:32:36.375 [2024-07-24 10:54:43.604332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.375 [2024-07-24 10:54:43.604370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.375 [2024-07-24 10:54:43.604384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.375 [2024-07-24 10:54:43.604390] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.375 [2024-07-24 10:54:43.604396] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.375 [2024-07-24 10:54:43.614731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.375 qpair failed and we were unable to recover it. 00:32:36.375 [2024-07-24 10:54:43.624341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.375 [2024-07-24 10:54:43.624377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.375 [2024-07-24 10:54:43.624390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.375 [2024-07-24 10:54:43.624396] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.375 [2024-07-24 10:54:43.624402] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.375 [2024-07-24 10:54:43.634705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.375 qpair failed and we were unable to recover it. 
00:32:36.375 [2024-07-24 10:54:43.644426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.375 [2024-07-24 10:54:43.644465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.375 [2024-07-24 10:54:43.644479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.375 [2024-07-24 10:54:43.644485] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.375 [2024-07-24 10:54:43.644498] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.375 [2024-07-24 10:54:43.654760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.375 qpair failed and we were unable to recover it. 00:32:36.375 [2024-07-24 10:54:43.664511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.375 [2024-07-24 10:54:43.664548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.375 [2024-07-24 10:54:43.664563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.375 [2024-07-24 10:54:43.664569] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.375 [2024-07-24 10:54:43.664575] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.375 [2024-07-24 10:54:43.674849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.375 qpair failed and we were unable to recover it. 00:32:36.375 [2024-07-24 10:54:43.684567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.375 [2024-07-24 10:54:43.684604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.375 [2024-07-24 10:54:43.684617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.375 [2024-07-24 10:54:43.684624] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.375 [2024-07-24 10:54:43.684629] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.375 [2024-07-24 10:54:43.695021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.375 qpair failed and we were unable to recover it. 
00:32:36.375 [2024-07-24 10:54:43.704572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.375 [2024-07-24 10:54:43.704611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.375 [2024-07-24 10:54:43.704626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.376 [2024-07-24 10:54:43.704633] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.376 [2024-07-24 10:54:43.704638] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.376 [2024-07-24 10:54:43.715028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.376 qpair failed and we were unable to recover it. 00:32:36.376 [2024-07-24 10:54:43.724745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.376 [2024-07-24 10:54:43.724783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.376 [2024-07-24 10:54:43.724797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.376 [2024-07-24 10:54:43.724804] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.376 [2024-07-24 10:54:43.724809] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.376 [2024-07-24 10:54:43.735124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.376 qpair failed and we were unable to recover it. 00:32:36.376 [2024-07-24 10:54:43.744774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.376 [2024-07-24 10:54:43.744815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.376 [2024-07-24 10:54:43.744829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.376 [2024-07-24 10:54:43.744835] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.376 [2024-07-24 10:54:43.744841] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.376 [2024-07-24 10:54:43.755131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.376 qpair failed and we were unable to recover it. 
00:32:36.376 [2024-07-24 10:54:43.764749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.376 [2024-07-24 10:54:43.764789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.376 [2024-07-24 10:54:43.764808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.376 [2024-07-24 10:54:43.764814] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.376 [2024-07-24 10:54:43.764820] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.376 [2024-07-24 10:54:43.775151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.376 qpair failed and we were unable to recover it. 00:32:36.376 [2024-07-24 10:54:43.784809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.376 [2024-07-24 10:54:43.784847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.376 [2024-07-24 10:54:43.784861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.376 [2024-07-24 10:54:43.784867] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.376 [2024-07-24 10:54:43.784873] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.376 [2024-07-24 10:54:43.795103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.376 qpair failed and we were unable to recover it. 00:32:36.376 [2024-07-24 10:54:43.804926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.376 [2024-07-24 10:54:43.804964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.376 [2024-07-24 10:54:43.804979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.376 [2024-07-24 10:54:43.804985] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.376 [2024-07-24 10:54:43.804991] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.376 [2024-07-24 10:54:43.814921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.376 qpair failed and we were unable to recover it. 
00:32:36.376 [2024-07-24 10:54:43.825017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.376 [2024-07-24 10:54:43.825049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.376 [2024-07-24 10:54:43.825064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.376 [2024-07-24 10:54:43.825071] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.376 [2024-07-24 10:54:43.825077] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.635 [2024-07-24 10:54:43.835308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.635 qpair failed and we were unable to recover it. 00:32:36.635 [2024-07-24 10:54:43.845112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.635 [2024-07-24 10:54:43.845148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.635 [2024-07-24 10:54:43.845166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.635 [2024-07-24 10:54:43.845172] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.635 [2024-07-24 10:54:43.845178] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.635 [2024-07-24 10:54:43.855396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.635 qpair failed and we were unable to recover it. 00:32:36.635 [2024-07-24 10:54:43.865069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.635 [2024-07-24 10:54:43.865104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.635 [2024-07-24 10:54:43.865118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.635 [2024-07-24 10:54:43.865124] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.635 [2024-07-24 10:54:43.865130] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.635 [2024-07-24 10:54:43.875353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.635 qpair failed and we were unable to recover it. 
00:32:36.635 [2024-07-24 10:54:43.885124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.635 [2024-07-24 10:54:43.885163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.635 [2024-07-24 10:54:43.885177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.635 [2024-07-24 10:54:43.885184] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.635 [2024-07-24 10:54:43.885189] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.635 [2024-07-24 10:54:43.895603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.635 qpair failed and we were unable to recover it. 00:32:36.635 [2024-07-24 10:54:43.905237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.635 [2024-07-24 10:54:43.905275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.635 [2024-07-24 10:54:43.905288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.635 [2024-07-24 10:54:43.905294] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.635 [2024-07-24 10:54:43.905300] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.635 [2024-07-24 10:54:43.915532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.635 qpair failed and we were unable to recover it. 00:32:36.635 [2024-07-24 10:54:43.925216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.635 [2024-07-24 10:54:43.925254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.635 [2024-07-24 10:54:43.925269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.635 [2024-07-24 10:54:43.925275] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.635 [2024-07-24 10:54:43.925281] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.635 [2024-07-24 10:54:43.935689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.635 qpair failed and we were unable to recover it. 
00:32:36.636 [2024-07-24 10:54:43.945315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:43.945359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:43.945376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:43.945382] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:43.945388] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.636 [2024-07-24 10:54:43.955853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.636 qpair failed and we were unable to recover it. 00:32:36.636 [2024-07-24 10:54:43.965416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:43.965455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:43.965468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:43.965475] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:43.965481] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.636 [2024-07-24 10:54:43.975739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.636 qpair failed and we were unable to recover it. 00:32:36.636 [2024-07-24 10:54:43.985520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:43.985556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:43.985570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:43.985576] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:43.985582] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.636 [2024-07-24 10:54:43.995702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.636 qpair failed and we were unable to recover it. 
00:32:36.636 [2024-07-24 10:54:44.005524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:44.005562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:44.005577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:44.005584] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:44.005590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.636 [2024-07-24 10:54:44.015974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.636 qpair failed and we were unable to recover it. 00:32:36.636 [2024-07-24 10:54:44.025638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:44.025679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:44.025693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:44.025700] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:44.025708] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.636 [2024-07-24 10:54:44.035985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.636 qpair failed and we were unable to recover it. 00:32:36.636 [2024-07-24 10:54:44.045712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:44.045749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:44.045763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:44.045770] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:44.045775] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.636 [2024-07-24 10:54:44.056037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.636 qpair failed and we were unable to recover it. 
00:32:36.636 [2024-07-24 10:54:44.065630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:44.065667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:44.065681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:44.065687] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:44.065693] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.636 [2024-07-24 10:54:44.076097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.636 qpair failed and we were unable to recover it. 00:32:36.636 [2024-07-24 10:54:44.085721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.636 [2024-07-24 10:54:44.085756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.636 [2024-07-24 10:54:44.085770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.636 [2024-07-24 10:54:44.085776] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.636 [2024-07-24 10:54:44.085782] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:36.894 [2024-07-24 10:54:44.096125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:36.894 qpair failed and we were unable to recover it. 
00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Read completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 Write completed with error (sct=0, sc=8) 00:32:37.826 starting I/O failed 00:32:37.826 [2024-07-24 10:54:45.101376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:37.826 [2024-07-24 10:54:45.108761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-24 10:54:45.108804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-24 10:54:45.108821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-24 10:54:45.108828] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-24 10:54:45.108834] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:32:37.826 [2024-07-24 10:54:45.119399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-24 10:54:45.128959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-24 10:54:45.128994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-24 10:54:45.129009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-24 10:54:45.129016] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-24 10:54:45.129022] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:32:37.826 [2024-07-24 10:54:45.139514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-24 10:54:45.149039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-24 10:54:45.149075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-24 10:54:45.149094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-24 10:54:45.149102] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-24 10:54:45.149108] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:37.826 [2024-07-24 10:54:45.159380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-24 10:54:45.169061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-24 10:54:45.169104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-24 10:54:45.169119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-24 10:54:45.169125] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-24 10:54:45.169131] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:37.826 [2024-07-24 10:54:45.179423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.826 qpair failed and we were unable to recover it. 
00:32:37.826 [2024-07-24 10:54:45.179548] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:37.826 A controller has encountered a failure and is being reset. 00:32:37.826 [2024-07-24 10:54:45.189204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-24 10:54:45.189256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-24 10:54:45.189283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-24 10:54:45.189295] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-24 10:54:45.189305] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:32:37.826 [2024-07-24 10:54:45.199449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-24 10:54:45.209227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-24 10:54:45.209260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-24 10:54:45.209275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-24 10:54:45.209283] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.827 [2024-07-24 10:54:45.209289] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:32:37.827 [2024-07-24 10:54:45.219253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.827 qpair failed and we were unable to recover it. 00:32:37.827 [2024-07-24 10:54:45.219418] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:37.827 [2024-07-24 10:54:45.250977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:37.827 Controller properly reset. 00:32:38.085 Initializing NVMe Controllers 00:32:38.085 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:38.085 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:38.085 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:38.085 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:38.085 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:38.085 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:38.085 Initialization complete. Launching workers. 
00:32:38.085 Starting thread on core 1 00:32:38.085 Starting thread on core 2 00:32:38.085 Starting thread on core 3 00:32:38.085 Starting thread on core 0 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:38.085 00:32:38.085 real 0m11.927s 00:32:38.085 user 0m25.617s 00:32:38.085 sys 0m1.971s 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:38.085 ************************************ 00:32:38.085 END TEST nvmf_target_disconnect_tc2 00:32:38.085 ************************************ 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:38.085 ************************************ 00:32:38.085 START TEST nvmf_target_disconnect_tc3 00:32:38.085 ************************************ 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2418679 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:32:38.085 10:54:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:32:38.085 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.987 10:54:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2417491 00:32:39.987 10:54:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, 
sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Read completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 Write completed with error (sct=0, sc=8) 00:32:41.365 starting I/O failed 00:32:41.365 [2024-07-24 10:54:48.535688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:41.933 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2417491 Killed "${NVMF_APP[@]}" "$@" 00:32:41.933 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:32:41.933 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:41.933 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:41.933 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2419363 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2419363 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:42.192 10:54:49 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2419363 ']' 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:42.192 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.192 [2024-07-24 10:54:49.433086] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:32:42.192 [2024-07-24 10:54:49.433133] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.192 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.192 [2024-07-24 10:54:49.501167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error 
(sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Write completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 Read completed with error (sct=0, sc=8) 00:32:42.192 starting I/O failed 00:32:42.192 [2024-07-24 10:54:49.540594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.192 [2024-07-24 10:54:49.540748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.192 [2024-07-24 10:54:49.540778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.192 [2024-07-24 10:54:49.540784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.192 [2024-07-24 10:54:49.540790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.192 [2024-07-24 10:54:49.540795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:42.192 [2024-07-24 10:54:49.540905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:32:42.192 [2024-07-24 10:54:49.541012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:32:42.192 [2024-07-24 10:54:49.541097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:42.192 [2024-07-24 10:54:49.541098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:32:42.192 [2024-07-24 10:54:49.542201] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:42.192 [2024-07-24 10:54:49.542219] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:42.192 [2024-07-24 10:54:49.542227] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:42.451 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:42.451 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.452 10:54:49 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.452 Malloc0 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.452 [2024-07-24 10:54:49.731494] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1286fb0/0x1292c80) succeed. 00:32:42.452 [2024-07-24 10:54:49.740846] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12885a0/0x1332d80) succeed. 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.452 [2024-07-24 10:54:49.881499] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.452 10:54:49 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.452 10:54:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2418679 00:32:43.389 [2024-07-24 10:54:50.546286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.389 qpair failed and we were unable to recover it. 00:32:43.389 [2024-07-24 10:54:50.547853] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:43.389 [2024-07-24 10:54:50.547870] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:43.389 [2024-07-24 10:54:50.547876] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:44.325 [2024-07-24 10:54:51.551701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.325 qpair failed and we were unable to recover it. 00:32:44.325 [2024-07-24 10:54:51.553235] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:44.325 [2024-07-24 10:54:51.553255] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:44.325 [2024-07-24 10:54:51.553261] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:45.261 [2024-07-24 10:54:52.557044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.261 qpair failed and we were unable to recover it. 00:32:45.262 [2024-07-24 10:54:52.558564] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:45.262 [2024-07-24 10:54:52.558580] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:45.262 [2024-07-24 10:54:52.558585] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:46.198 [2024-07-24 10:54:53.562500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.198 qpair failed and we were unable to recover it. 
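(The tc3 setup recorded above is the usual SPDK RPC sequence for exposing a malloc-backed namespace over RDMA. As a rough sketch only, the same configuration could be applied to an already running nvmf_tgt with scripts/rpc.py on its default RPC socket; the rpc_cmd wrapper in the test ultimately issues the same RPCs, and the sizes, NQN, serial, and address below are copied from the log:
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024     # create the RDMA transport
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # attach the bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420   # discovery on the same address
)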
00:32:46.198 [2024-07-24 10:54:53.563986] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:46.198 [2024-07-24 10:54:53.564002] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:46.198 [2024-07-24 10:54:53.564007] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:47.132 [2024-07-24 10:54:54.567734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-07-24 10:54:54.569152] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:47.132 [2024-07-24 10:54:54.569167] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:47.132 [2024-07-24 10:54:54.569173] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:48.508 [2024-07-24 10:54:55.573141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.508 qpair failed and we were unable to recover it. 00:32:48.508 [2024-07-24 10:54:55.574617] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:48.508 [2024-07-24 10:54:55.574633] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:48.508 [2024-07-24 10:54:55.574639] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:32:49.444 [2024-07-24 10:54:56.578480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.444 qpair failed and we were unable to recover it. 
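(The loop above shows the host side receiving RDMA_CM_EVENT_REJECTED / RDMA connect error -74 roughly once per second while the target is unreachable during the disconnect test. Independent of the test harness's own initiator, the same listener can be probed manually from a Linux host with nvme-cli, assuming the nvme-rdma module is loaded and using the address and subsystem NQN recorded above:
  nvme discover -t rdma -a 192.168.100.9 -s 4420                                # query the discovery service
  nvme connect -t rdma -a 192.168.100.9 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach the subsystem as /dev/nvmeX
)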
00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Write completed with error (sct=0, sc=8) 00:32:50.379 starting I/O failed 00:32:50.379 Read completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Read completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Read completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Read completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Read completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Write completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Write completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Read completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Write completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Read completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Write completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 Write completed with error (sct=0, sc=8) 00:32:50.380 starting I/O failed 00:32:50.380 [2024-07-24 10:54:57.583515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.380 [2024-07-24 10:54:57.584904] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:50.380 [2024-07-24 10:54:57.584920] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:50.380 [2024-07-24 10:54:57.584926] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:51.314 [2024-07-24 10:54:58.588871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such 
device or address) on qpair id 3 00:32:51.314 qpair failed and we were unable to recover it. 00:32:51.314 [2024-07-24 10:54:58.590305] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:51.314 [2024-07-24 10:54:58.590320] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:51.314 [2024-07-24 10:54:58.590326] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:32:52.249 [2024-07-24 10:54:59.594103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.249 qpair failed and we were unable to recover it. 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Read completed with error (sct=0, sc=8) 00:32:53.184 starting I/O failed 00:32:53.184 Write completed with error (sct=0, sc=8) 00:32:53.184 starting 
I/O failed 00:32:53.184 [2024-07-24 10:55:00.599240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:53.184 [2024-07-24 10:55:00.601107] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:53.184 [2024-07-24 10:55:00.601123] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:53.184 [2024-07-24 10:55:00.601129] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:32:54.559 [2024-07-24 10:55:01.604985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:54.559 qpair failed and we were unable to recover it. 00:32:54.559 [2024-07-24 10:55:01.606391] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:54.559 [2024-07-24 10:55:01.606407] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:54.559 [2024-07-24 10:55:01.606413] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:32:55.495 [2024-07-24 10:55:02.610422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:55.495 qpair failed and we were unable to recover it. 00:32:55.495 [2024-07-24 10:55:02.610543] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:55.495 A controller has encountered a failure and is being reset. 00:32:55.495 Resorting to new failover address 192.168.100.9 00:32:55.495 [2024-07-24 10:55:02.612263] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:55.495 [2024-07-24 10:55:02.612290] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:55.495 [2024-07-24 10:55:02.612300] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:32:56.430 [2024-07-24 10:55:03.616119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:56.430 qpair failed and we were unable to recover it. 00:32:56.430 [2024-07-24 10:55:03.617654] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:56.430 [2024-07-24 10:55:03.617669] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:56.430 [2024-07-24 10:55:03.617675] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:32:57.365 [2024-07-24 10:55:04.621502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:57.365 qpair failed and we were unable to recover it. 00:32:57.365 [2024-07-24 10:55:04.621617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:57.365 [2024-07-24 10:55:04.621709] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:57.365 [2024-07-24 10:55:04.652704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:57.365 Controller properly reset. 00:32:57.365 Initializing NVMe Controllers 00:32:57.365 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.365 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:57.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:57.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:57.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:57.365 Initialization complete. Launching workers. 00:32:57.365 Starting thread on core 1 00:32:57.365 Starting thread on core 2 00:32:57.365 Starting thread on core 3 00:32:57.365 Starting thread on core 0 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:32:57.365 00:32:57.365 real 0m19.327s 00:32:57.365 user 1m6.094s 00:32:57.365 sys 0m3.884s 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:57.365 ************************************ 00:32:57.365 END TEST nvmf_target_disconnect_tc3 00:32:57.365 ************************************ 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:57.365 rmmod nvme_rdma 00:32:57.365 rmmod nvme_fabrics 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2419363 ']' 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2419363 00:32:57.365 
10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2419363 ']' 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2419363 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:57.365 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2419363 00:32:57.624 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:32:57.624 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:32:57.624 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2419363' 00:32:57.624 killing process with pid 2419363 00:32:57.624 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2419363 00:32:57.624 10:55:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2419363 00:32:57.883 10:55:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:57.883 10:55:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:57.883 00:32:57.883 real 0m38.072s 00:32:57.883 user 2m35.808s 00:32:57.883 sys 0m10.184s 00:32:57.883 10:55:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.883 10:55:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:57.883 ************************************ 00:32:57.883 END TEST nvmf_target_disconnect 00:32:57.883 ************************************ 00:32:57.883 10:55:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:57.883 00:32:57.883 real 6m38.236s 00:32:57.883 user 19m49.301s 00:32:57.883 sys 1m18.810s 00:32:57.883 10:55:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.883 10:55:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.883 ************************************ 00:32:57.883 END TEST nvmf_host 00:32:57.883 ************************************ 00:32:57.883 00:32:57.883 real 26m31.069s 00:32:57.883 user 78m59.833s 00:32:57.883 sys 5m2.930s 00:32:57.883 10:55:05 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.883 10:55:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:57.883 ************************************ 00:32:57.883 END TEST nvmf_rdma 00:32:57.883 ************************************ 00:32:57.883 10:55:05 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:32:57.883 10:55:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:57.883 10:55:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:57.883 10:55:05 -- common/autotest_common.sh@10 -- # set +x 00:32:57.883 ************************************ 00:32:57.883 START TEST spdkcli_nvmf_rdma 00:32:57.883 ************************************ 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 
00:32:57.883 * Looking for test storage... 00:32:57.883 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.883 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2421998 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2421998 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 2421998 ']' 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:58.142 [2024-07-24 10:55:05.388586] Starting SPDK v24.09-pre git sha1 8711e7e9b / DPDK 22.11.4 initialization... 00:32:58.142 [2024-07-24 10:55:05.388633] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421998 ] 00:32:58.142 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.142 [2024-07-24 10:55:05.441708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:58.142 [2024-07-24 10:55:05.482755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.142 [2024-07-24 10:55:05.482757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.142 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:32:58.402 10:55:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:03.710 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.710 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:33:03.711 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:33:03.711 Found 0000:da:00.1 
(0x15b3 - 0x1015) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:33:03.711 Found net devices under 0000:da:00.0: mlx_0_0 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:33:03.711 Found net devices under 0000:da:00.1: mlx_0_1 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:03.711 10:55:10 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:03.711 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:03.711 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:33:03.711 altname enp218s0f0np0 00:33:03.711 altname ens818f0np0 00:33:03.711 inet 192.168.100.8/24 scope global mlx_0_0 00:33:03.711 valid_lft forever preferred_lft forever 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:03.711 10:55:10 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:03.711 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:03.711 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:03.711 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:33:03.711 altname enp218s0f1np1 00:33:03.711 altname ens818f1np1 00:33:03.711 inet 192.168.100.9/24 scope global mlx_0_1 00:33:03.711 valid_lft forever preferred_lft forever 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:03.712 10:55:10 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma 
-- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:03.712 192.168.100.9' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:03.712 192.168.100.9' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:03.712 192.168.100.9' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:03.712 10:55:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:03.712 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:03.712 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:03.712 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:03.712 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:03.712 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:03.712 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:03.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:03.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:03.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:03.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:03.712 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:03.712 ' 00:33:06.264 [2024-07-24 10:55:13.489298] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1740760/0x15ddb40) succeed. 00:33:06.264 [2024-07-24 10:55:13.498763] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1741c60/0x1628b40) succeed. 
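(The long spdkcli_job.py invocation above is simply a batch of ordinary spdkcli commands fed to the same target. As an illustrative sketch, not the test's own driver, equivalent commands can be issued one at a time with scripts/spdkcli.py; the command strings below are taken verbatim from the job arguments:
  scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc1"                                      # 32 MiB malloc bdev
  scripts/spdkcli.py "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"   # RDMA transport
  scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
  scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"
)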
00:33:07.637 [2024-07-24 10:55:14.728333] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:33:09.540 [2024-07-24 10:55:16.891219] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:33:11.443 [2024-07-24 10:55:18.749269] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:33:12.820 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:12.820 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:12.820 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:12.820 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:12.820 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:12.820 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:12.820 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:12.820 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:12.820 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:12.820 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:12.820 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:12.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:12.820 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:33:13.078 10:55:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:13.337 10:55:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:13.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:13.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:13.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:13.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:33:13.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:33:13.337 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:13.337 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:13.337 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:13.337 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:13.337 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:13.338 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:13.338 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:13.338 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:13.338 ' 00:33:18.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:18.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:18.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:18.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:18.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:33:18.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:33:18.608 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:18.608 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:18.608 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:18.608 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:18.608 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:18.608 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:18.608 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:18.608 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2421998 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 2421998 ']' 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 2421998 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2421998 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2421998' 00:33:18.867 killing process with pid 2421998 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 2421998 00:33:18.867 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 2421998 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:19.127 rmmod nvme_rdma 00:33:19.127 rmmod nvme_fabrics 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:19.127 00:33:19.127 real 0m21.261s 00:33:19.127 user 0m45.484s 00:33:19.127 sys 0m4.958s 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:19.127 10:55:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:19.127 ************************************ 00:33:19.127 END TEST spdkcli_nvmf_rdma 00:33:19.127 ************************************ 00:33:19.127 10:55:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:33:19.127 10:55:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:19.127 10:55:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:19.127 10:55:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:19.127 10:55:26 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:33:19.127 10:55:26 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:33:19.127 10:55:26 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:33:19.127 10:55:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:19.127 10:55:26 -- common/autotest_common.sh@10 -- # set +x 00:33:19.127 10:55:26 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:33:19.127 10:55:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:19.127 10:55:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:19.127 10:55:26 -- common/autotest_common.sh@10 -- # set +x 00:33:23.317 INFO: APP EXITING 00:33:23.317 INFO: killing all VMs 00:33:23.317 INFO: killing vhost app 00:33:23.317 INFO: EXIT DONE 00:33:25.849 Waiting for block devices as requested 00:33:25.849 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:33:25.849 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:25.849 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:25.849 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:25.849 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:25.849 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:26.107 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:26.107 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 
00:33:26.107 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:26.365 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:26.365 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:26.365 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:26.365 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:26.624 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:26.624 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:26.624 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:26.624 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:29.158 Cleaning 00:33:29.158 Removing: /var/run/dpdk/spdk0/config 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:29.158 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:29.158 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:29.158 Removing: /var/run/dpdk/spdk1/config 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:29.158 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:29.158 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:29.158 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:29.158 Removing: /var/run/dpdk/spdk2/config 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:29.158 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:29.158 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:29.158 Removing: /var/run/dpdk/spdk3/config 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:29.158 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:29.158 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:29.158 Removing: /var/run/dpdk/spdk4/config 00:33:29.158 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:29.159 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:29.159 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:29.418 Removing: /dev/shm/bdevperf_trace.pid2077332 00:33:29.418 Removing: /dev/shm/bdevperf_trace.pid2325665 00:33:29.418 Removing: /dev/shm/bdev_svc_trace.1 00:33:29.418 Removing: /dev/shm/nvmf_trace.0 00:33:29.418 Removing: /dev/shm/spdk_tgt_trace.pid2037371 00:33:29.418 Removing: /var/run/dpdk/spdk0 00:33:29.418 Removing: /var/run/dpdk/spdk1 00:33:29.418 Removing: /var/run/dpdk/spdk2 00:33:29.418 Removing: /var/run/dpdk/spdk3 00:33:29.418 Removing: /var/run/dpdk/spdk4 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2034524 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2035717 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2037371 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2037806 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2038747 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2038935 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2039933 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2039963 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2040185 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2044821 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2046142 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2046504 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2046661 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2046954 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2047234 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2047484 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2047638 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2047876 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2048575 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2051519 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2051776 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2051823 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2052030 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2052310 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2052421 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2052812 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2052819 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2053195 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2053301 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2053462 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2053564 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2053908 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2054158 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2054446 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2058123 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2061913 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2071976 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2072835 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2077332 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2077578 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2081494 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2087523 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2090038 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2099278 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2122067 00:33:29.418 Removing: 
/var/run/dpdk/spdk_pid2125508 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2215396 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2220209 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2225499 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2233077 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2279199 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2284401 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2323960 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2324801 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2325665 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2329569 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2335952 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2336868 00:33:29.418 Removing: /var/run/dpdk/spdk_pid2337783 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2338686 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2338937 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2343147 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2343149 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2347206 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2347859 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2348325 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2349008 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2349025 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2351092 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2352919 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2354741 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2356584 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2358352 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2360103 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2366357 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2366877 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2368901 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2369810 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2375798 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2378407 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2383379 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2392589 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2392591 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2410753 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2411038 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2416519 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2416804 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2418679 00:33:29.678 Removing: /var/run/dpdk/spdk_pid2421998 00:33:29.678 Clean 00:33:29.678 10:55:37 -- common/autotest_common.sh@1451 -- # return 0 00:33:29.678 10:55:37 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:33:29.678 10:55:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:29.678 10:55:37 -- common/autotest_common.sh@10 -- # set +x 00:33:29.678 10:55:37 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:33:29.678 10:55:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:29.678 10:55:37 -- common/autotest_common.sh@10 -- # set +x 00:33:29.678 10:55:37 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:33:29.678 10:55:37 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:33:29.678 10:55:37 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:33:29.678 10:55:37 -- spdk/autotest.sh@395 -- # hash lcov 00:33:29.678 10:55:37 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:29.678 10:55:37 -- spdk/autotest.sh@397 -- # hostname 00:33:29.937 10:55:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 
--rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:33:29.937 geninfo: WARNING: invalid characters removed from testname! 00:33:51.870 10:55:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:51.870 10:55:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:52.498 10:55:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:53.873 10:56:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:55.777 10:56:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:57.684 10:56:04 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:59.062 10:56:06 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:59.062 10:56:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:59.062 10:56:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:59.062 10:56:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.062 10:56:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.062 10:56:06 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.062 10:56:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.062 10:56:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.062 10:56:06 -- paths/export.sh@5 -- $ export PATH 00:33:59.062 10:56:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.062 10:56:06 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:33:59.062 10:56:06 -- common/autobuild_common.sh@447 -- $ date +%s 00:33:59.062 10:56:06 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721811366.XXXXXX 00:33:59.062 10:56:06 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721811366.EiOeWf 00:33:59.062 10:56:06 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:33:59.062 10:56:06 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:33:59.062 10:56:06 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:33:59.062 10:56:06 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:33:59.062 10:56:06 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:59.062 10:56:06 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:59.062 10:56:06 -- common/autobuild_common.sh@463 -- $ get_config_params 00:33:59.062 10:56:06 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:33:59.062 10:56:06 -- common/autotest_common.sh@10 -- $ set +x 00:33:59.062 10:56:06 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:33:59.062 10:56:06 -- common/autobuild_common.sh@465 -- 
$ start_monitor_resources 00:33:59.062 10:56:06 -- pm/common@17 -- $ local monitor 00:33:59.062 10:56:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:59.062 10:56:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:59.062 10:56:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:59.062 10:56:06 -- pm/common@21 -- $ date +%s 00:33:59.062 10:56:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:59.062 10:56:06 -- pm/common@21 -- $ date +%s 00:33:59.062 10:56:06 -- pm/common@25 -- $ sleep 1 00:33:59.062 10:56:06 -- pm/common@21 -- $ date +%s 00:33:59.062 10:56:06 -- pm/common@21 -- $ date +%s 00:33:59.062 10:56:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721811366 00:33:59.062 10:56:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721811366 00:33:59.062 10:56:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721811366 00:33:59.062 10:56:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721811366 00:33:59.062 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721811366_collect-vmstat.pm.log 00:33:59.062 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721811366_collect-cpu-load.pm.log 00:33:59.062 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721811366_collect-cpu-temp.pm.log 00:33:59.062 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721811366_collect-bmc-pm.bmc.pm.log 00:34:00.000 10:56:07 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:00.000 10:56:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:34:00.000 10:56:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:00.000 10:56:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:00.000 10:56:07 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:00.000 10:56:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:00.000 10:56:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:00.000 10:56:07 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:00.000 10:56:07 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:00.000 10:56:07 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:34:00.259 10:56:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:00.259 10:56:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:00.259 10:56:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:00.259 10:56:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:00.259 10:56:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.259 10:56:07 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:00.259 10:56:07 -- pm/common@44 -- $ pid=2436878 00:34:00.259 10:56:07 -- pm/common@50 -- $ kill -TERM 2436878 00:34:00.259 10:56:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.259 10:56:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:00.259 10:56:07 -- pm/common@44 -- $ pid=2436880 00:34:00.259 10:56:07 -- pm/common@50 -- $ kill -TERM 2436880 00:34:00.259 10:56:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.259 10:56:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:00.259 10:56:07 -- pm/common@44 -- $ pid=2436881 00:34:00.259 10:56:07 -- pm/common@50 -- $ kill -TERM 2436881 00:34:00.259 10:56:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:00.259 10:56:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:00.259 10:56:07 -- pm/common@44 -- $ pid=2436904 00:34:00.259 10:56:07 -- pm/common@50 -- $ sudo -E kill -TERM 2436904 00:34:00.259 + [[ -n 1917390 ]] 00:34:00.259 + sudo kill 1917390 00:34:00.268 [Pipeline] } 00:34:00.285 [Pipeline] // stage 00:34:00.289 [Pipeline] } 00:34:00.299 [Pipeline] // timeout 00:34:00.304 [Pipeline] } 00:34:00.317 [Pipeline] // catchError 00:34:00.321 [Pipeline] } 00:34:00.334 [Pipeline] // wrap 00:34:00.339 [Pipeline] } 00:34:00.349 [Pipeline] // catchError 00:34:00.356 [Pipeline] stage 00:34:00.357 [Pipeline] { (Epilogue) 00:34:00.367 [Pipeline] catchError 00:34:00.368 [Pipeline] { 00:34:00.379 [Pipeline] echo 00:34:00.380 Cleanup processes 00:34:00.384 [Pipeline] sh 00:34:00.664 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:00.664 2437005 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:34:00.664 2437277 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:00.679 [Pipeline] sh 00:34:00.964 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:00.964 ++ grep -v 'sudo pgrep' 00:34:00.964 ++ awk '{print $1}' 00:34:00.964 + sudo kill -9 2437005 00:34:00.976 [Pipeline] sh 00:34:01.262 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:11.259 [Pipeline] sh 00:34:11.543 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:11.543 Artifacts sizes are good 00:34:11.559 [Pipeline] archiveArtifacts 00:34:11.567 Archiving artifacts 00:34:11.775 [Pipeline] sh 00:34:12.084 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:34:12.099 [Pipeline] cleanWs 00:34:12.108 [WS-CLEANUP] Deleting project workspace... 00:34:12.108 [WS-CLEANUP] Deferred wipeout is used... 00:34:12.115 [WS-CLEANUP] done 00:34:12.117 [Pipeline] } 00:34:12.139 [Pipeline] // catchError 00:34:12.153 [Pipeline] sh 00:34:12.435 + logger -p user.info -t JENKINS-CI 00:34:12.444 [Pipeline] } 00:34:12.463 [Pipeline] // stage 00:34:12.469 [Pipeline] } 00:34:12.487 [Pipeline] // node 00:34:12.493 [Pipeline] End of Pipeline 00:34:12.525 Finished: SUCCESS